Nov 8 00:27:52.102546 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:27:52.102595 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:27:52.102614 kernel: BIOS-provided physical RAM map:
Nov 8 00:27:52.102629 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 8 00:27:52.102644 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 8 00:27:52.102657 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 8 00:27:52.102674 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 8 00:27:52.102693 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 8 00:27:52.102708 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Nov 8 00:27:52.102722 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Nov 8 00:27:52.102736 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Nov 8 00:27:52.102748 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Nov 8 00:27:52.102761 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 8 00:27:52.102775 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 8 00:27:52.102797 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 8 00:27:52.102814 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 8 00:27:52.102830 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 8 00:27:52.102844 kernel: NX (Execute Disable) protection: active
Nov 8 00:27:52.102858 kernel: APIC: Static calls initialized
Nov 8 00:27:52.102873 kernel: efi: EFI v2.7 by EDK II
Nov 8 00:27:52.102887 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd323018
Nov 8 00:27:52.102904 kernel: SMBIOS 2.4 present.
Nov 8 00:27:52.102919 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Nov 8 00:27:52.102933 kernel: Hypervisor detected: KVM
Nov 8 00:27:52.102954 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:27:52.102970 kernel: kvm-clock: using sched offset of 12840160305 cycles
Nov 8 00:27:52.102988 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:27:52.103006 kernel: tsc: Detected 2299.998 MHz processor
Nov 8 00:27:52.103023 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:27:52.103041 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:27:52.103058 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 8 00:27:52.103077 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Nov 8 00:27:52.103093 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:27:52.103121 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 8 00:27:52.103136 kernel: Using GB pages for direct mapping
Nov 8 00:27:52.103230 kernel: Secure boot disabled
Nov 8 00:27:52.103244 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:27:52.103258 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 8 00:27:52.103273 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 8 00:27:52.103288 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 8 00:27:52.103311 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 8 00:27:52.103330 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 8 00:27:52.103363 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Nov 8 00:27:52.103380 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 8 00:27:52.103396 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 8 00:27:52.103412 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 8 00:27:52.103429 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 8 00:27:52.103450 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 8 00:27:52.103466 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 8 00:27:52.103482 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 8 00:27:52.103499 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 8 00:27:52.103516 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 8 00:27:52.103532 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 8 00:27:52.103549 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 8 00:27:52.103564 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 8 00:27:52.103580 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 8 00:27:52.103601 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 8 00:27:52.103618 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:27:52.103636 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:27:52.103654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 8 00:27:52.103671 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 8 00:27:52.103689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 8 00:27:52.103707 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Nov 8 00:27:52.103724 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Nov 8 00:27:52.103741 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Nov 8 00:27:52.103764 kernel: Zone ranges:
Nov 8 00:27:52.103781 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:27:52.103798 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:27:52.103815 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Nov 8 00:27:52.103833 kernel: Movable zone start for each node
Nov 8 00:27:52.103850 kernel: Early memory node ranges
Nov 8 00:27:52.103867 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 8 00:27:52.103885 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 8 00:27:52.103902 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Nov 8 00:27:52.103924 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 8 00:27:52.103941 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 8 00:27:52.103957 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 8 00:27:52.103974 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:27:52.103992 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 8 00:27:52.104009 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 8 00:27:52.104027 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 8 00:27:52.104044 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 8 00:27:52.104062 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 8 00:27:52.104083 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:27:52.104100 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:27:52.104117 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:27:52.104135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:27:52.104179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:27:52.104196 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:27:52.104213 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:27:52.104231 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:27:52.104248 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 8 00:27:52.104270 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:27:52.104289 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:27:52.104306 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:27:52.104323 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:27:52.104340 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:27:52.104364 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:27:52.104380 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:27:52.104397 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:27:52.104415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:27:52.104448 kernel: random: crng init done
Nov 8 00:27:52.104478 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 8 00:27:52.104494 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:27:52.104511 kernel: Fallback order for Node 0: 0
Nov 8 00:27:52.104528 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Nov 8 00:27:52.104544 kernel: Policy zone: Normal
Nov 8 00:27:52.104562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:27:52.104579 kernel: software IO TLB: area num 2.
Nov 8 00:27:52.104602 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 346940K reserved, 0K cma-reserved)
Nov 8 00:27:52.104620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:27:52.104638 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:27:52.104656 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:27:52.104674 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:27:52.104691 kernel: Dynamic Preempt: voluntary
Nov 8 00:27:52.104707 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:27:52.104726 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:27:52.104745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:27:52.104782 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:27:52.104802 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:27:52.104821 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:27:52.104844 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:27:52.104862 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:27:52.104881 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:27:52.104900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:27:52.104919 kernel: Console: colour dummy device 80x25
Nov 8 00:27:52.104943 kernel: printk: console [ttyS0] enabled
Nov 8 00:27:52.104962 kernel: ACPI: Core revision 20230628
Nov 8 00:27:52.104981 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:27:52.105000 kernel: x2apic enabled
Nov 8 00:27:52.105019 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:27:52.105038 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 8 00:27:52.105058 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 8 00:27:52.105077 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 8 00:27:52.105096 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 8 00:27:52.105120 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 8 00:27:52.105139 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:27:52.105191 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 8 00:27:52.105207 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 8 00:27:52.105223 kernel: Spectre V2 : Mitigation: IBRS
Nov 8 00:27:52.105239 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:27:52.105256 kernel: RETBleed: Mitigation: IBRS
Nov 8 00:27:52.105275 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:27:52.105293 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Nov 8 00:27:52.105318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:27:52.105336 kernel: MDS: Mitigation: Clear CPU buffers
Nov 8 00:27:52.105364 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:27:52.105382 kernel: active return thunk: its_return_thunk
Nov 8 00:27:52.105401 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:27:52.105420 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:27:52.105440 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:27:52.105460 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:27:52.105476 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:27:52.105499 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 8 00:27:52.105518 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:27:52.105537 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:27:52.105555 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:27:52.105574 kernel: landlock: Up and running.
Nov 8 00:27:52.105593 kernel: SELinux: Initializing.
Nov 8 00:27:52.105611 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:27:52.105631 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:27:52.105651 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 8 00:27:52.105675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:27:52.105695 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:27:52.105715 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:27:52.105733 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 8 00:27:52.105752 kernel: signal: max sigframe size: 1776
Nov 8 00:27:52.105770 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:27:52.105790 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:27:52.105808 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:27:52.105827 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:27:52.105848 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:27:52.105866 kernel: .... node #0, CPUs: #1
Nov 8 00:27:52.105884 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 8 00:27:52.105905 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:27:52.105924 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:27:52.105943 kernel: smpboot: Max logical packages: 1
Nov 8 00:27:52.105961 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 8 00:27:52.105979 kernel: devtmpfs: initialized
Nov 8 00:27:52.106002 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:27:52.106021 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 8 00:27:52.106040 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:27:52.106060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:27:52.106079 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:27:52.106098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:27:52.106116 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:27:52.106135 kernel: audit: type=2000 audit(1762561670.345:1): state=initialized audit_enabled=0 res=1
Nov 8 00:27:52.106238 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:27:52.106269 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:27:52.106289 kernel: cpuidle: using governor menu
Nov 8 00:27:52.106308 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:27:52.106326 kernel: dca service started, version 1.12.1
Nov 8 00:27:52.106343 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:27:52.106370 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:27:52.106388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:27:52.106407 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:27:52.106425 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:27:52.106448 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:27:52.106467 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:27:52.106485 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:27:52.106504 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:27:52.106523 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 8 00:27:52.106541 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:27:52.106560 kernel: ACPI: Interpreter enabled
Nov 8 00:27:52.106578 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:27:52.106597 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:27:52.106619 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:27:52.106638 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 8 00:27:52.106657 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 8 00:27:52.106675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:27:52.106951 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:27:52.107159 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 8 00:27:52.107367 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 8 00:27:52.107399 kernel: PCI host bridge to bus 0000:00
Nov 8 00:27:52.107594 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:27:52.107777 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:27:52.107954 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:27:52.108128 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 8 00:27:52.108322 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:27:52.108545 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 8 00:27:52.108758 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 8 00:27:52.108964 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 8 00:27:52.109188 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 8 00:27:52.109417 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 8 00:27:52.109623 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 8 00:27:52.109811 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 8 00:27:52.110030 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:27:52.110262 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 8 00:27:52.112399 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 8 00:27:52.112632 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 8 00:27:52.112833 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 8 00:27:52.113034 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 8 00:27:52.113057 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:27:52.113083 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:27:52.113100 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:27:52.113118 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:27:52.113136 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 8 00:27:52.113275 kernel: iommu: Default domain type: Translated
Nov 8 00:27:52.113295 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:27:52.113317 kernel: efivars: Registered efivars operations
Nov 8 00:27:52.113335 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:27:52.113360 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:27:52.113384 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 8 00:27:52.113402 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 8 00:27:52.113419 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 8 00:27:52.113436 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 8 00:27:52.113453 kernel: vgaarb: loaded
Nov 8 00:27:52.113471 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:27:52.113488 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:27:52.113515 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:27:52.113533 kernel: pnp: PnP ACPI init
Nov 8 00:27:52.113555 kernel: pnp: PnP ACPI: found 7 devices
Nov 8 00:27:52.113573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:27:52.113592 kernel: NET: Registered PF_INET protocol family
Nov 8 00:27:52.113609 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:27:52.113628 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 8 00:27:52.113646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:27:52.113665 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:27:52.113683 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 8 00:27:52.113701 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 8 00:27:52.113724 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:27:52.113743 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:27:52.113761 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:27:52.113779 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:27:52.113980 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:27:52.114177 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:27:52.114363 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:27:52.114549 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 8 00:27:52.114762 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 8 00:27:52.114789 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:27:52.114810 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:27:52.114830 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 8 00:27:52.114849 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:27:52.114869 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 8 00:27:52.114889 kernel: clocksource: Switched to clocksource tsc
Nov 8 00:27:52.114908 kernel: Initialise system trusted keyrings
Nov 8 00:27:52.114932 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 8 00:27:52.114952 kernel: Key type asymmetric registered
Nov 8 00:27:52.114972 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:27:52.114991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:27:52.115011 kernel: io scheduler mq-deadline registered
Nov 8 00:27:52.115030 kernel: io scheduler kyber registered
Nov 8 00:27:52.115050 kernel: io scheduler bfq registered
Nov 8 00:27:52.115070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:27:52.115090 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 8 00:27:52.117354 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 8 00:27:52.117390 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 8 00:27:52.117588 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 8 00:27:52.117615 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 8 00:27:52.117802 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 8 00:27:52.117825 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:27:52.117844 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:27:52.117863 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 8 00:27:52.117882 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 8 00:27:52.117908 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 8 00:27:52.118101 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 8 00:27:52.118127 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:27:52.118179 kernel: i8042: Warning: Keylock active
Nov 8 00:27:52.118198 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:27:52.118217 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:27:52.118423 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 8 00:27:52.118600 kernel: rtc_cmos 00:00: registered as rtc0
Nov 8 00:27:52.118772 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:27:51 UTC (1762561671)
Nov 8 00:27:52.118941 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 8 00:27:52.118964 kernel: intel_pstate: CPU model not supported
Nov 8 00:27:52.118982 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:27:52.119000 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:27:52.119019 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:27:52.119038 kernel: Segment Routing with IPv6
Nov 8 00:27:52.119054 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:27:52.119078 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:27:52.119094 kernel: Key type dns_resolver registered
Nov 8 00:27:52.119111 kernel: IPI shorthand broadcast: enabled
Nov 8 00:27:52.119128 kernel: sched_clock: Marking stable (852009577, 147239521)->(1054754586, -55505488)
Nov 8 00:27:52.119184 kernel: registered taskstats version 1
Nov 8 00:27:52.119204 kernel: Loading compiled-in X.509 certificates
Nov 8 00:27:52.119223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:27:52.119244 kernel: Key type .fscrypt registered
Nov 8 00:27:52.119261 kernel: Key type fscrypt-provisioning registered
Nov 8 00:27:52.119282 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:27:52.119300 kernel: ima: No architecture policies found
Nov 8 00:27:52.119317 kernel: clk: Disabling unused clocks
Nov 8 00:27:52.119335 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:27:52.119362 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:27:52.119380 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:27:52.119398 kernel: Run /init as init process
Nov 8 00:27:52.119416 kernel: with arguments:
Nov 8 00:27:52.119434 kernel: /init
Nov 8 00:27:52.119457 kernel: with environment:
Nov 8 00:27:52.119474 kernel: HOME=/
Nov 8 00:27:52.119491 kernel: TERM=linux
Nov 8 00:27:52.119510 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 8 00:27:52.119533 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:27:52.119554 systemd[1]: Detected virtualization google.
Nov 8 00:27:52.119575 systemd[1]: Detected architecture x86-64.
Nov 8 00:27:52.119597 systemd[1]: Running in initrd.
Nov 8 00:27:52.119615 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:27:52.119632 systemd[1]: Hostname set to .
Nov 8 00:27:52.119651 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:27:52.119671 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:27:52.119691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:27:52.119712 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:27:52.119733 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:27:52.119757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:27:52.119778 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:27:52.119799 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:27:52.119823 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:27:52.119841 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:27:52.119863 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:27:52.119884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:27:52.119908 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:27:52.119929 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:27:52.119970 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:27:52.119995 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:27:52.120017 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:27:52.120038 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:27:52.120063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:27:52.120085 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:27:52.120213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:27:52.120250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:27:52.120272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:27:52.120296 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:27:52.120319 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:27:52.120340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:27:52.120364 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:27:52.120399 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:27:52.120423 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:27:52.120446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:27:52.120469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:52.120493 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:27:52.120564 systemd-journald[183]: Collecting audit messages is disabled.
Nov 8 00:27:52.120620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:27:52.120642 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:27:52.120668 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:27:52.120697 systemd-journald[183]: Journal started
Nov 8 00:27:52.120740 systemd-journald[183]: Runtime Journal (/run/log/journal/3f0b01f012c0436aac9dc23dc1b014c6) is 8.0M, max 148.7M, 140.7M free.
Nov 8 00:27:52.100912 systemd-modules-load[184]: Inserted module 'overlay'
Nov 8 00:27:52.127285 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:27:52.137207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:27:52.150441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:27:52.158603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:27:52.171564 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:27:52.160609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:27:52.169300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:27:52.181233 kernel: Bridge firewalling registered
Nov 8 00:27:52.180186 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 8 00:27:52.191627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:27:52.192401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:27:52.195833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:27:52.217600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:27:52.218319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:27:52.229516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:27:52.232958 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:27:52.247388 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:27:52.272406 dracut-cmdline[219]: dracut-dracut-053
Nov 8 00:27:52.277057 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:27:52.289224 systemd-resolved[212]: Positive Trust Anchors:
Nov 8 00:27:52.289737 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:27:52.289957 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:27:52.296541 systemd-resolved[212]: Defaulting to hostname 'linux'.
Nov 8 00:27:52.298273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:27:52.324434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:27:52.380188 kernel: SCSI subsystem initialized
Nov 8 00:27:52.392179 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:27:52.404179 kernel: iscsi: registered transport (tcp)
Nov 8 00:27:52.430182 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:27:52.430275 kernel: QLogic iSCSI HBA Driver
Nov 8 00:27:52.483629 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:27:52.494355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:27:52.534179 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:27:52.534272 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:27:52.534301 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:27:52.582211 kernel: raid6: avx2x4 gen() 18011 MB/s
Nov 8 00:27:52.599187 kernel: raid6: avx2x2 gen() 18093 MB/s
Nov 8 00:27:52.616633 kernel: raid6: avx2x1 gen() 13782 MB/s
Nov 8 00:27:52.616679 kernel: raid6: using algorithm avx2x2 gen() 18093 MB/s
Nov 8 00:27:52.634765 kernel: raid6: .... xor() 18128 MB/s, rmw enabled
Nov 8 00:27:52.634853 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:27:52.658185 kernel: xor: automatically using best checksumming function avx
Nov 8 00:27:52.838191 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:27:52.853124 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:27:52.860427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:27:52.894495 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Nov 8 00:27:52.901654 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:27:52.911364 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:27:52.943086 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 8 00:27:52.982290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:27:52.997417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:27:53.080366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:27:53.095452 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:27:53.129030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:27:53.131882 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:27:53.142267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:27:53.146281 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:27:53.153350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:27:53.191721 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:27:53.198166 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:27:53.220538 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:27:53.222186 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:27:53.293220 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:27:53.298888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:27:53.319933 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 8 00:27:53.299033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:27:53.318299 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:27:53.322076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:27:53.322396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:27:53.327085 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:53.342847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:53.381520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:27:53.390276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:27:53.395855 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Nov 8 00:27:53.396237 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 8 00:27:53.396474 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 8 00:27:53.396712 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 8 00:27:53.396943 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:27:53.406659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:27:53.406737 kernel: GPT:17805311 != 33554431
Nov 8 00:27:53.406763 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:27:53.407503 kernel: GPT:17805311 != 33554431
Nov 8 00:27:53.409530 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:27:53.409579 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:53.410750 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 8 00:27:53.430492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:27:53.468187 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (447)
Nov 8 00:27:53.475188 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (455)
Nov 8 00:27:53.493749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 8 00:27:53.506832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 8 00:27:53.513336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 8 00:27:53.513490 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 8 00:27:53.528125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 8 00:27:53.532364 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:27:53.557109 disk-uuid[550]: Primary Header is updated.
Nov 8 00:27:53.557109 disk-uuid[550]: Secondary Entries is updated.
Nov 8 00:27:53.557109 disk-uuid[550]: Secondary Header is updated.
Nov 8 00:27:53.572190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:53.578189 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:53.591184 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:54.602195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:54.602313 disk-uuid[551]: The operation has completed successfully.
Nov 8 00:27:54.679937 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:27:54.680089 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:27:54.709360 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:27:54.739780 sh[568]: Success
Nov 8 00:27:54.770172 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:27:54.869827 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:27:54.876761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:27:54.912356 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:27:54.954617 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:27:54.954707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:27:54.954734 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:27:54.970891 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:27:54.970967 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:27:55.004223 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:27:55.010084 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:27:55.020113 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:27:55.026382 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:27:55.066552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:27:55.118876 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:27:55.118923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:27:55.118948 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:27:55.118973 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:27:55.118998 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:27:55.138510 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:27:55.155358 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:27:55.165870 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:27:55.190400 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:27:55.239657 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:27:55.259448 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:27:55.309227 systemd-networkd[750]: lo: Link UP
Nov 8 00:27:55.309240 systemd-networkd[750]: lo: Gained carrier
Nov 8 00:27:55.311534 systemd-networkd[750]: Enumeration completed
Nov 8 00:27:55.311681 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:27:55.312414 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:27:55.312421 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:27:55.314741 systemd-networkd[750]: eth0: Link UP
Nov 8 00:27:55.314748 systemd-networkd[750]: eth0: Gained carrier
Nov 8 00:27:55.314761 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:27:55.400568 ignition[697]: Ignition 2.19.0
Nov 8 00:27:55.323360 systemd[1]: Reached target network.target - Network.
Nov 8 00:27:55.400586 ignition[697]: Stage: fetch-offline
Nov 8 00:27:55.336288 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562'
Nov 8 00:27:55.400673 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:27:55.336306 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.61/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 8 00:27:55.400690 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 8 00:27:55.402964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:27:55.400822 ignition[697]: parsed url from cmdline: ""
Nov 8 00:27:55.424376 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:27:55.400829 ignition[697]: no config URL provided
Nov 8 00:27:55.465026 unknown[760]: fetched base config from "system"
Nov 8 00:27:55.400838 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:27:55.465039 unknown[760]: fetched base config from "system"
Nov 8 00:27:55.400853 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:27:55.465050 unknown[760]: fetched user config from "gcp"
Nov 8 00:27:55.400865 ignition[697]: failed to fetch config: resource requires networking
Nov 8 00:27:55.467592 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:27:55.401321 ignition[697]: Ignition finished successfully
Nov 8 00:27:55.504387 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:27:55.455798 ignition[760]: Ignition 2.19.0
Nov 8 00:27:55.547022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:27:55.455808 ignition[760]: Stage: fetch
Nov 8 00:27:55.565623 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:27:55.456015 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:27:55.621760 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:27:55.456028 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 8 00:27:55.642748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:27:55.456182 ignition[760]: parsed url from cmdline: ""
Nov 8 00:27:55.660329 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:27:55.456189 ignition[760]: no config URL provided
Nov 8 00:27:55.677358 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:27:55.456198 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:27:55.694346 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:27:55.456208 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:27:55.708338 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:27:55.456232 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 8 00:27:55.730344 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:27:55.459287 ignition[760]: GET result: OK
Nov 8 00:27:55.459454 ignition[760]: parsing config with SHA512: 2c68d25b8e13638abe8e05d269833f05d635f59bbe7ed58313f6f028329c771dc4207671d2411d4890b9adb369759a7237f4367ed19591a52824b50bbe1ef634
Nov 8 00:27:55.465464 ignition[760]: fetch: fetch complete
Nov 8 00:27:55.465471 ignition[760]: fetch: fetch passed
Nov 8 00:27:55.465526 ignition[760]: Ignition finished successfully
Nov 8 00:27:55.544443 ignition[766]: Ignition 2.19.0
Nov 8 00:27:55.544453 ignition[766]: Stage: kargs
Nov 8 00:27:55.544652 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:27:55.544664 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 8 00:27:55.545801 ignition[766]: kargs: kargs passed
Nov 8 00:27:55.545874 ignition[766]: Ignition finished successfully
Nov 8 00:27:55.598264 ignition[771]: Ignition 2.19.0
Nov 8 00:27:55.598277 ignition[771]: Stage: disks
Nov 8 00:27:55.598481 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:27:55.598494 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 8 00:27:55.599608 ignition[771]: disks: disks passed
Nov 8 00:27:55.599671 ignition[771]: Ignition finished successfully
Nov 8 00:27:55.797876 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 8 00:27:55.944298 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:27:55.949284 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:27:56.107681 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:27:56.107571 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:27:56.117123 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:27:56.143294 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:27:56.165317 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:27:56.166185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:27:56.227340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788)
Nov 8 00:27:56.227396 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:27:56.227441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:27:56.227467 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:27:56.166271 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:27:56.269440 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:27:56.269489 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:27:56.166311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:27:56.253097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:27:56.288291 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:27:56.293410 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:27:56.442588 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:27:56.452373 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:27:56.462302 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:27:56.474300 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:27:56.614867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:27:56.619440 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:27:56.647940 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:27:56.670335 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:27:56.680617 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:27:56.716154 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:27:56.725348 ignition[900]: INFO : Ignition 2.19.0
Nov 8 00:27:56.725348 ignition[900]: INFO : Stage: mount
Nov 8 00:27:56.725348 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:27:56.725348 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 8 00:27:56.725348 ignition[900]: INFO : mount: mount passed
Nov 8 00:27:56.725348 ignition[900]: INFO : Ignition finished successfully
Nov 8 00:27:56.735928 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:27:56.747305 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:27:56.998393 systemd-networkd[750]: eth0: Gained IPv6LL
Nov 8 00:27:57.114460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:27:57.160248 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913) Nov 8 00:27:57.178508 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:57.178603 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:57.178645 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:27:57.201622 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:27:57.201709 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:27:57.204899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:27:57.238864 ignition[930]: INFO : Ignition 2.19.0 Nov 8 00:27:57.238864 ignition[930]: INFO : Stage: files Nov 8 00:27:57.253320 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:57.253320 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:57.253320 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:27:57.253320 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:27:57.253320 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:27:57.310336 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:27:57.254780 unknown[930]: wrote ssh authorized keys file for user: core Nov 8 00:27:57.388346 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:27:57.757839 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:27:57.774321 
ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:27:58.412830 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:27:58.911071 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:58.911071 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:27:58.948346 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:27:58.948346 ignition[930]: INFO : files: files passed Nov 8 00:27:58.948346 ignition[930]: INFO : Ignition finished successfully Nov 8 00:27:58.915388 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:27:58.935393 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:27:58.954379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:27:58.989972 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:27:59.156309 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:27:59.156309 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:27:58.990187 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
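The files stage above fetches remote payloads with retry attempts ("attempt #1"), writes them under /sysroot, links the kubernetes sysext into /etc/extensions, and enables prepare-helm.service via preset. A rough Python approximation of those operations follows; the paths and URLs are copied from the log, but the retry policy, the helper names, and the WantedBy=multi-user.target assumption are illustrative, not Ignition's actual Go implementation:

```python
# Rough approximation of the Ignition "files" stage operations logged
# above: fetch a payload with retries, write it under /sysroot, and
# create the symlinks the log reports.
import os
import time
import urllib.request

SYSROOT = "/sysroot"

def fetch_with_retries(url: str, dest: str, attempts: int = 3) -> None:
    """Download url to dest, retrying with backoff on failure."""
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    for attempt in range(1, attempts + 1):
        try:
            print(f"GET {url}: attempt #{attempt}")
            urllib.request.urlretrieve(url, dest)
            return
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(2 ** attempt)  # backoff policy is assumed

def write_link(target: str, link_path: str) -> None:
    """Create a symlink, replacing any stale one."""
    os.makedirs(os.path.dirname(link_path), exist_ok=True)
    if os.path.lexists(link_path):
        os.remove(link_path)
    os.symlink(target, link_path)

# Payload and link targets are copied verbatim from the log above.
fetch_with_retries(
    "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
    os.path.join(SYSROOT, "opt/helm-v3.17.0-linux-amd64.tar.gz"))
write_link(
    "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
    os.path.join(SYSROOT, "etc/extensions/kubernetes.raw"))
# "setting preset to enabled" boils down to the [Install] symlink;
# WantedBy=multi-user.target is an assumption here.
write_link(
    "/etc/systemd/system/prepare-helm.service",
    os.path.join(SYSROOT,
                 "etc/systemd/system/multi-user.target.wants/prepare-helm.service"))
```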
Nov 8 00:27:59.205476 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:27:59.055005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:27:59.059700 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:27:59.090395 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:27:59.169023 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:27:59.169206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:27:59.181520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:27:59.205326 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:27:59.223436 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:27:59.230360 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:27:59.282015 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:27:59.306364 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:27:59.341930 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:27:59.353488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:27:59.378622 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:27:59.398573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:27:59.398780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:27:59.431637 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:27:59.451527 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:27:59.469580 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:27:59.487602 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:27:59.507513 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:27:59.529514 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:27:59.549626 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:27:59.568555 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:27:59.588635 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:27:59.608534 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:27:59.627585 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:27:59.627846 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:27:59.658631 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:27:59.678516 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:27:59.699593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:27:59.699773 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:27:59.717456 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:27:59.717698 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
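The initrd-parse-etc.service step above ("Mountpoints Configured in the Real Root") effectively re-runs systemd's generators once /sysroot is mounted, so that entries in the real root's /etc/fstab become mount units before the switch. The real work is done by systemd-fstab-generator in C; the small parser below is only an explanatory stand-in for the field extraction it performs, and the sample fstab lines are hypothetical:

```python
# Illustrative sketch of the parsing behind systemd-fstab-generator:
# read fstab and derive the (what, where, fstype, options) tuples
# that become mount units. Not the real generator.
from typing import NamedTuple

class FstabEntry(NamedTuple):
    what: str      # device path, LABEL=, or UUID=
    where: str     # mountpoint
    fstype: str
    options: str

def parse_fstab(text: str) -> list[FstabEntry]:
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) >= 4:
            entries.append(FstabEntry(*fields[:4]))
    return entries

sample = "LABEL=ROOT / btrfs rw 0 0\nLABEL=OEM /oem btrfs ro 0 0\n"
for e in parse_fstab(sample):
    print(f"{e.where} <- {e.what} ({e.fstype}, {e.options})")
```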
Nov 8 00:27:59.828458 ignition[983]: INFO : Ignition 2.19.0 Nov 8 00:27:59.828458 ignition[983]: INFO : Stage: umount Nov 8 00:27:59.828458 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:59.828458 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:59.828458 ignition[983]: INFO : umount: umount passed Nov 8 00:27:59.828458 ignition[983]: INFO : Ignition finished successfully Nov 8 00:27:59.745566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:27:59.745839 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:27:59.766596 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:27:59.766796 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:27:59.794435 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:27:59.842455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:27:59.850510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:27:59.850748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:27:59.922678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:27:59.922960 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:27:59.954530 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:27:59.955724 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:27:59.955852 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:27:59.960987 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:27:59.961100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:27:59.979836 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:27:59.979980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:27:59.996722 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:27:59.996787 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:28:00.014638 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:28:00.014720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:28:00.031694 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:28:00.031777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:28:00.048588 systemd[1]: Stopped target network.target - Network. Nov 8 00:28:00.064567 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:28:00.064655 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:00.097562 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:28:00.105575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:28:00.109251 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:00.132455 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:28:00.140590 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:28:00.158572 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:28:00.158635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:00.173587 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:28:00.173652 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 8 00:28:00.190574 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:28:00.190651 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:28:00.207601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:28:00.207675 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:00.241570 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:28:00.241650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:00.250827 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:28:00.255256 systemd-networkd[750]: eth0: DHCPv6 lease lost Nov 8 00:28:00.277598 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:28:00.297822 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:28:00.297962 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:28:00.319509 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:28:00.319674 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:28:00.337349 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:28:00.337406 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:00.351306 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:28:00.385546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:28:00.385633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:00.400642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:28:00.400719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:00.428556 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:28:00.428636 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:00.446492 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:28:00.446585 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:00.467670 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:00.486966 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:28:00.487155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:00.501685 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:28:00.501756 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:00.895353 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 8 00:28:00.522658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:28:00.522744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:00.539665 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:28:00.539757 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:00.577745 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:28:00.577852 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:00.602646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:00.602753 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:28:00.646460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:28:00.650504 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:28:00.650588 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:00.688586 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:28:00.688660 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:00.719561 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:28:00.719658 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:00.741506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:00.741592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:00.750049 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:28:00.750213 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:28:00.767859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:28:00.767984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:28:00.785725 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:28:00.808411 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:28:00.854617 systemd[1]: Switching root. Nov 8 00:28:01.134289 systemd-journald[183]: Journal stopped
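With the initrd journal closed at switch-root, the teardown above reads bottom-up: systemd stops units so that nothing goes away before the things that depend on it (Ignition stages first, then the sysroot mounts, then networking, udev, and finally the journal itself). A toy model of that reverse-dependency ordering follows; the edge set is inferred from the stop order visible in this log, not read from real unit files, and the real transaction logic lives inside systemd:

```python
# Toy model of reverse-order teardown: compute a valid start order
# from inferred dependencies, then reverse it to get the stop order,
# so no unit is stopped before its dependents. Requires Python 3.9+.
from graphlib import TopologicalSorter

# mapping: unit -> units it depends on (started after, stopped before)
deps = {
    "ignition-files.service": {"ignition-mount.service"},
    "ignition-mount.service": {"sysroot-boot.service"},
    "sysroot-boot.service": {"initrd-setup-root.service"},
    "initrd-setup-root.service": {"systemd-networkd.service"},
    "systemd-networkd.service": {"systemd-udevd.service"},
}

start_order = list(TopologicalSorter(deps).static_order())
stop_order = list(reversed(start_order))  # teardown mirrors startup
print("stop order:", " -> ".join(stop_order))
```

Reversing a valid start order is, to a first approximation, how systemd schedules stop jobs for a transaction like switching root, which is why the log's shutdown sequence mirrors the earlier startup sequence.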
PV spinlocks enabled Nov 8 00:27:52.104397 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:27:52.104415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:27:52.104448 kernel: random: crng init done Nov 8 00:27:52.104478 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 8 00:27:52.104494 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:27:52.104511 kernel: Fallback order for Node 0: 0 Nov 8 00:27:52.104528 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Nov 8 00:27:52.104544 kernel: Policy zone: Normal Nov 8 00:27:52.104562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:27:52.104579 kernel: software IO TLB: area num 2. Nov 8 00:27:52.104602 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 346940K reserved, 0K cma-reserved) Nov 8 00:27:52.104620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:27:52.104638 kernel: Kernel/User page tables isolation: enabled Nov 8 00:27:52.104656 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:27:52.104674 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:27:52.104691 kernel: Dynamic Preempt: voluntary Nov 8 00:27:52.104707 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:27:52.104726 kernel: rcu: RCU event tracing is enabled. Nov 8 00:27:52.104745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:27:52.104782 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:27:52.104802 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:27:52.104821 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:27:52.104844 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:27:52.104862 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:27:52.104881 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:27:52.104900 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:27:52.104919 kernel: Console: colour dummy device 80x25 Nov 8 00:27:52.104943 kernel: printk: console [ttyS0] enabled Nov 8 00:27:52.104962 kernel: ACPI: Core revision 20230628 Nov 8 00:27:52.104981 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:27:52.105000 kernel: x2apic enabled Nov 8 00:27:52.105019 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:27:52.105038 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 8 00:27:52.105058 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 8 00:27:52.105077 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Nov 8 00:27:52.105096 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 8 00:27:52.105120 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 8 00:27:52.105139 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:27:52.105191 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:27:52.105207 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:27:52.105223 kernel: Spectre V2 : Mitigation: IBRS Nov 8 00:27:52.105239 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:27:52.105256 kernel: RETBleed: Mitigation: IBRS Nov 8 00:27:52.105275 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:27:52.105293 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 8 00:27:52.105318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:27:52.105336 kernel: MDS: Mitigation: Clear CPU buffers Nov 8 00:27:52.105364 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:27:52.105382 kernel: active return thunk: its_return_thunk Nov 8 00:27:52.105401 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:27:52.105420 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:27:52.105440 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:27:52.105460 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:27:52.105476 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:27:52.105499 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 8 00:27:52.105518 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:27:52.105537 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:27:52.105555 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:27:52.105574 kernel: landlock: Up and running. Nov 8 00:27:52.105593 kernel: SELinux: Initializing. Nov 8 00:27:52.105611 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:27:52.105631 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:27:52.105651 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 8 00:27:52.105675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:27:52.105695 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:27:52.105715 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:27:52.105733 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 8 00:27:52.105752 kernel: signal: max sigframe size: 1776 Nov 8 00:27:52.105770 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:27:52.105790 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:27:52.105808 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:27:52.105827 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:27:52.105848 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:27:52.105866 kernel: .... node #0, CPUs: #1 Nov 8 00:27:52.105884 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 8 00:27:52.105905 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 8 00:27:52.105924 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:27:52.105943 kernel: smpboot: Max logical packages: 1 Nov 8 00:27:52.105961 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 8 00:27:52.105979 kernel: devtmpfs: initialized Nov 8 00:27:52.106002 kernel: x86/mm: Memory block size: 128MB Nov 8 00:27:52.106021 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 8 00:27:52.106040 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:27:52.106060 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:27:52.106079 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:27:52.106098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:27:52.106116 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:27:52.106135 kernel: audit: type=2000 audit(1762561670.345:1): state=initialized audit_enabled=0 res=1 Nov 8 00:27:52.106238 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:27:52.106269 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:27:52.106289 kernel: cpuidle: using governor menu Nov 8 00:27:52.106308 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:27:52.106326 kernel: dca service started, version 1.12.1 Nov 8 00:27:52.106343 kernel: PCI: Using configuration type 1 for base access Nov 8 00:27:52.106370 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 8 00:27:52.106388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:27:52.106407 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:27:52.106425 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:27:52.106448 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:27:52.106467 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:27:52.106485 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:27:52.106504 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:27:52.106523 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 8 00:27:52.106541 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:27:52.106560 kernel: ACPI: Interpreter enabled Nov 8 00:27:52.106578 kernel: ACPI: PM: (supports S0 S3 S5) Nov 8 00:27:52.106597 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:27:52.106619 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:27:52.106638 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 8 00:27:52.106657 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 8 00:27:52.106675 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:27:52.106951 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:27:52.107159 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 8 00:27:52.107367 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 8 00:27:52.107399 kernel: PCI host bridge to bus 0000:00 Nov 8 00:27:52.107594 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:27:52.107777 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:27:52.107954 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:27:52.108128 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Nov 8 00:27:52.108322 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:27:52.108545 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 8 00:27:52.108758 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Nov 8 00:27:52.108964 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 8 00:27:52.109188 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 8 00:27:52.109417 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Nov 8 00:27:52.109623 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 8 00:27:52.109811 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Nov 8 00:27:52.110030 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:27:52.110262 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Nov 8 00:27:52.112399 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Nov 8 00:27:52.112632 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Nov 8 00:27:52.112833 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Nov 8 00:27:52.113034 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Nov 8 00:27:52.113057 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:27:52.113083 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:27:52.113100 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:27:52.113118 kernel: ACPI: PCI: Interrupt 
link LNKD configured for IRQ 11 Nov 8 00:27:52.113136 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 8 00:27:52.113275 kernel: iommu: Default domain type: Translated Nov 8 00:27:52.113295 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:27:52.113317 kernel: efivars: Registered efivars operations Nov 8 00:27:52.113335 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:27:52.113360 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:27:52.113384 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Nov 8 00:27:52.113402 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Nov 8 00:27:52.113419 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Nov 8 00:27:52.113436 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Nov 8 00:27:52.113453 kernel: vgaarb: loaded Nov 8 00:27:52.113471 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:27:52.113488 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:27:52.113515 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:27:52.113533 kernel: pnp: PnP ACPI init Nov 8 00:27:52.113555 kernel: pnp: PnP ACPI: found 7 devices Nov 8 00:27:52.113573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:27:52.113592 kernel: NET: Registered PF_INET protocol family Nov 8 00:27:52.113609 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:27:52.113628 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 8 00:27:52.113646 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:27:52.113665 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:27:52.113683 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 8 00:27:52.113701 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 8 00:27:52.113724 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 8 00:27:52.113743 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 8 00:27:52.113761 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:27:52.113779 kernel: NET: Registered PF_XDP protocol family Nov 8 00:27:52.113980 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:27:52.114177 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:27:52.114363 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:27:52.114549 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Nov 8 00:27:52.114762 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:27:52.114789 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:27:52.114810 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 00:27:52.114830 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Nov 8 00:27:52.114849 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:27:52.114869 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 8 00:27:52.114889 kernel: clocksource: Switched to clocksource tsc Nov 8 00:27:52.114908 kernel: Initialise system trusted keyrings Nov 8 00:27:52.114932 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 8 00:27:52.114952 kernel: Key type asymmetric registered Nov 8 00:27:52.114972 
kernel: Asymmetric key parser 'x509' registered Nov 8 00:27:52.114991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:27:52.115011 kernel: io scheduler mq-deadline registered Nov 8 00:27:52.115030 kernel: io scheduler kyber registered Nov 8 00:27:52.115050 kernel: io scheduler bfq registered Nov 8 00:27:52.115070 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:27:52.115090 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 8 00:27:52.117354 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Nov 8 00:27:52.117390 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Nov 8 00:27:52.117588 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Nov 8 00:27:52.117615 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 8 00:27:52.117802 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Nov 8 00:27:52.117825 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:27:52.117844 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:27:52.117863 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 8 00:27:52.117882 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Nov 8 00:27:52.117908 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Nov 8 00:27:52.118101 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Nov 8 00:27:52.118127 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:27:52.118179 kernel: i8042: Warning: Keylock active Nov 8 00:27:52.118198 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:27:52.118217 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:27:52.118423 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 8 00:27:52.118600 kernel: rtc_cmos 00:00: registered as rtc0 Nov 8 00:27:52.118772 kernel: rtc_cmos 00:00: setting system clock to 2025-11-08T00:27:51 UTC (1762561671) Nov 8 00:27:52.118941 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 8 00:27:52.118964 kernel: intel_pstate: CPU model not supported Nov 8 00:27:52.118982 kernel: pstore: Using crash dump compression: deflate Nov 8 00:27:52.119000 kernel: pstore: Registered efi_pstore as persistent store backend Nov 8 00:27:52.119019 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:27:52.119038 kernel: Segment Routing with IPv6 Nov 8 00:27:52.119054 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:27:52.119078 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:27:52.119094 kernel: Key type dns_resolver registered Nov 8 00:27:52.119111 kernel: IPI shorthand broadcast: enabled Nov 8 00:27:52.119128 kernel: sched_clock: Marking stable (852009577, 147239521)->(1054754586, -55505488) Nov 8 00:27:52.119184 kernel: registered taskstats version 1 Nov 8 00:27:52.119204 kernel: Loading compiled-in X.509 certificates Nov 8 00:27:52.119223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:27:52.119244 kernel: Key type .fscrypt registered Nov 8 00:27:52.119261 kernel: Key type fscrypt-provisioning registered Nov 8 00:27:52.119282 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:27:52.119300 kernel: ima: No architecture policies found Nov 8 00:27:52.119317 kernel: clk: Disabling unused clocks Nov 8 00:27:52.119335 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:27:52.119362 kernel: Write protecting the kernel 
read-only data: 36864k Nov 8 00:27:52.119380 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:27:52.119398 kernel: Run /init as init process Nov 8 00:27:52.119416 kernel: with arguments: Nov 8 00:27:52.119434 kernel: /init Nov 8 00:27:52.119457 kernel: with environment: Nov 8 00:27:52.119474 kernel: HOME=/ Nov 8 00:27:52.119491 kernel: TERM=linux Nov 8 00:27:52.119510 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 8 00:27:52.119533 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:27:52.119554 systemd[1]: Detected virtualization google. Nov 8 00:27:52.119575 systemd[1]: Detected architecture x86-64. Nov 8 00:27:52.119597 systemd[1]: Running in initrd. Nov 8 00:27:52.119615 systemd[1]: No hostname configured, using default hostname. Nov 8 00:27:52.119632 systemd[1]: Hostname set to . Nov 8 00:27:52.119651 systemd[1]: Initializing machine ID from random generator. Nov 8 00:27:52.119671 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:27:52.119691 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:27:52.119712 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:27:52.119733 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:27:52.119757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:27:52.119778 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:27:52.119799 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:27:52.119823 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:27:52.119841 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:27:52.119863 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:27:52.119884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:27:52.119908 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:27:52.119929 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:27:52.119970 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:27:52.119995 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:27:52.120017 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:27:52.120038 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:27:52.120063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:27:52.120085 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:27:52.120213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:27:52.120250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:27:52.120272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 8 00:27:52.120296 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:27:52.120319 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:27:52.120340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:27:52.120364 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:27:52.120399 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:27:52.120423 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:27:52.120446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:27:52.120469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:27:52.120493 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:27:52.120564 systemd-journald[183]: Collecting audit messages is disabled. Nov 8 00:27:52.120620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:27:52.120642 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:27:52.120668 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:27:52.120697 systemd-journald[183]: Journal started Nov 8 00:27:52.120740 systemd-journald[183]: Runtime Journal (/run/log/journal/3f0b01f012c0436aac9dc23dc1b014c6) is 8.0M, max 148.7M, 140.7M free. Nov 8 00:27:52.100912 systemd-modules-load[184]: Inserted module 'overlay' Nov 8 00:27:52.127285 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:27:52.137207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:27:52.150441 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:27:52.158603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:27:52.171564 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:27:52.160609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:27:52.169300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:27:52.181233 kernel: Bridge firewalling registered Nov 8 00:27:52.180186 systemd-modules-load[184]: Inserted module 'br_netfilter' Nov 8 00:27:52.191627 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:27:52.192401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:27:52.195833 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:27:52.217600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:27:52.218319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:27:52.229516 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:27:52.232958 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:27:52.247388 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 8 00:27:52.272406 dracut-cmdline[219]: dracut-dracut-053 Nov 8 00:27:52.277057 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:27:52.289224 systemd-resolved[212]: Positive Trust Anchors: Nov 8 00:27:52.289737 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:27:52.289957 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:27:52.296541 systemd-resolved[212]: Defaulting to hostname 'linux'. Nov 8 00:27:52.298273 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:27:52.324434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:27:52.380188 kernel: SCSI subsystem initialized Nov 8 00:27:52.392179 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:27:52.404179 kernel: iscsi: registered transport (tcp) Nov 8 00:27:52.430182 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:27:52.430275 kernel: QLogic iSCSI HBA Driver Nov 8 00:27:52.483629 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:27:52.494355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:27:52.534179 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:27:52.534272 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:27:52.534301 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:27:52.582211 kernel: raid6: avx2x4 gen() 18011 MB/s Nov 8 00:27:52.599187 kernel: raid6: avx2x2 gen() 18093 MB/s Nov 8 00:27:52.616633 kernel: raid6: avx2x1 gen() 13782 MB/s Nov 8 00:27:52.616679 kernel: raid6: using algorithm avx2x2 gen() 18093 MB/s Nov 8 00:27:52.634765 kernel: raid6: .... xor() 18128 MB/s, rmw enabled Nov 8 00:27:52.634853 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:27:52.658185 kernel: xor: automatically using best checksumming function avx Nov 8 00:27:52.838191 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:27:52.853124 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:27:52.860427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:27:52.894495 systemd-udevd[401]: Using default interface naming scheme 'v255'. Nov 8 00:27:52.901654 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:27:52.911364 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 8 00:27:52.943086 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Nov 8 00:27:52.982290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:27:52.997417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:27:53.080366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:27:53.095452 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:27:53.129030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:27:53.131882 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:27:53.142267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:27:53.146281 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:27:53.153350 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:27:53.191721 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:27:53.198166 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:27:53.220538 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:27:53.222186 kernel: AES CTR mode by8 optimization enabled Nov 8 00:27:53.293220 kernel: scsi host0: Virtio SCSI HBA Nov 8 00:27:53.298888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:27:53.319933 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Nov 8 00:27:53.299033 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:27:53.318299 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:27:53.322076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:27:53.322396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:27:53.327085 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:27:53.342847 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:27:53.381520 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:27:53.390276 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:27:53.395855 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Nov 8 00:27:53.396237 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Nov 8 00:27:53.396474 kernel: sd 0:0:1:0: [sda] Write Protect is off Nov 8 00:27:53.396712 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Nov 8 00:27:53.396943 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 00:27:53.406659 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:27:53.406737 kernel: GPT:17805311 != 33554431 Nov 8 00:27:53.406763 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:27:53.407503 kernel: GPT:17805311 != 33554431 Nov 8 00:27:53.409530 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:27:53.409579 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:53.410750 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Nov 8 00:27:53.430492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:27:53.468187 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (447) Nov 8 00:27:53.475188 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (455) Nov 8 00:27:53.493749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Nov 8 00:27:53.506832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Nov 8 00:27:53.513336 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Nov 8 00:27:53.513490 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Nov 8 00:27:53.528125 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 8 00:27:53.532364 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:27:53.557109 disk-uuid[550]: Primary Header is updated. Nov 8 00:27:53.557109 disk-uuid[550]: Secondary Entries is updated. Nov 8 00:27:53.557109 disk-uuid[550]: Secondary Header is updated. Nov 8 00:27:53.572190 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:53.578189 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:53.591184 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:54.602195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:54.602313 disk-uuid[551]: The operation has completed successfully. Nov 8 00:27:54.679937 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:27:54.680089 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:27:54.709360 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:27:54.739780 sh[568]: Success Nov 8 00:27:54.770172 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:27:54.869827 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:27:54.876761 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:27:54.912356 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:27:54.954617 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:27:54.954707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:54.954734 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:27:54.970891 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:27:54.970967 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:27:55.004223 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:27:55.010084 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:27:55.020113 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:27:55.026382 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:27:55.066552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
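The GPT warnings above ("Alternate GPT header not at the end of the disk", GPT:17805311 != 33554431) are what a freshly grown disk looks like: the image's backup GPT header still sits where the smaller build-time disk ended. A minimal Python sketch of that arithmetic, using the two LBAs from the log and the 512-byte logical blocks reported for sda:

    # Both LBAs come from the "GPT:17805311 != 33554431" lines above;
    # sda reports 512-byte logical blocks.
    SECTOR = 512

    alt_header_lba = 17805311   # where the backup GPT header currently sits
    last_lba = 33554431         # last LBA of the disk, where it should sit

    print(f"GPT was written for a {(alt_header_lba + 1) * SECTOR / 2**30:.1f} GiB disk")
    print(f"the volume is actually {(last_lba + 1) * SECTOR / 2**30:.1f} GiB")

Relocating the backup header to the disk's last LBA, which is what the disk-uuid pass above appears to accomplish, and what `sgdisk -e` or GNU Parted (as the kernel message suggests) does by hand, clears the warning on the next scan.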
Nov 8 00:27:55.118876 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:55.118923 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:55.118948 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:27:55.118973 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:27:55.118998 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:27:55.138510 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:27:55.155358 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:55.165870 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:27:55.190400 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:27:55.239657 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:27:55.259448 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:27:55.309227 systemd-networkd[750]: lo: Link UP Nov 8 00:27:55.309240 systemd-networkd[750]: lo: Gained carrier Nov 8 00:27:55.311534 systemd-networkd[750]: Enumeration completed Nov 8 00:27:55.311681 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:27:55.312414 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:27:55.312421 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:55.314741 systemd-networkd[750]: eth0: Link UP Nov 8 00:27:55.314748 systemd-networkd[750]: eth0: Gained carrier Nov 8 00:27:55.314761 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:27:55.323360 systemd[1]: Reached target network.target - Network. Nov 8 00:27:55.336288 systemd-networkd[750]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:27:55.336306 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.61/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 8 00:27:55.400568 ignition[697]: Ignition 2.19.0 Nov 8 00:27:55.400586 ignition[697]: Stage: fetch-offline Nov 8 00:27:55.400673 ignition[697]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:55.400690 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:55.400822 ignition[697]: parsed url from cmdline: "" Nov 8 00:27:55.400829 ignition[697]: no config URL provided Nov 8 00:27:55.400838 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:27:55.400853 ignition[697]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:27:55.400865 ignition[697]: failed to fetch config: resource requires networking Nov 8 00:27:55.401321 ignition[697]: Ignition finished successfully Nov 8 00:27:55.402964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:27:55.424376 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 8 00:27:55.455798 ignition[760]: Ignition 2.19.0 Nov 8 00:27:55.455808 ignition[760]: Stage: fetch Nov 8 00:27:55.456015 ignition[760]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:55.456028 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:55.456182 ignition[760]: parsed url from cmdline: "" Nov 8 00:27:55.456189 ignition[760]: no config URL provided Nov 8 00:27:55.456198 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:27:55.456208 ignition[760]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:27:55.456232 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Nov 8 00:27:55.459287 ignition[760]: GET result: OK Nov 8 00:27:55.459454 ignition[760]: parsing config with SHA512: 2c68d25b8e13638abe8e05d269833f05d635f59bbe7ed58313f6f028329c771dc4207671d2411d4890b9adb369759a7237f4367ed19591a52824b50bbe1ef634 Nov 8 00:27:55.465026 unknown[760]: fetched base config from "system" Nov 8 00:27:55.465039 unknown[760]: fetched base config from "system" Nov 8 00:27:55.465050 unknown[760]: fetched user config from "gcp" Nov 8 00:27:55.465464 ignition[760]: fetch: fetch complete Nov 8 00:27:55.465471 ignition[760]: fetch: fetch passed Nov 8 00:27:55.465526 ignition[760]: Ignition finished successfully Nov 8 00:27:55.467592 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:27:55.504387 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:27:55.544443 ignition[766]: Ignition 2.19.0 Nov 8 00:27:55.544453 ignition[766]: Stage: kargs Nov 8 00:27:55.544652 ignition[766]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:55.544664 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:55.545801 ignition[766]: kargs: kargs passed Nov 8 00:27:55.545874 ignition[766]: Ignition finished successfully Nov 8 00:27:55.547022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:27:55.565623 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:27:55.598264 ignition[771]: Ignition 2.19.0 Nov 8 00:27:55.598277 ignition[771]: Stage: disks Nov 8 00:27:55.598481 ignition[771]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:55.598494 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:55.599608 ignition[771]: disks: disks passed Nov 8 00:27:55.599671 ignition[771]: Ignition finished successfully Nov 8 00:27:55.621760 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:27:55.642748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:27:55.660329 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:27:55.677358 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:27:55.694346 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:27:55.708338 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:27:55.730344 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:27:55.797876 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:27:55.944298 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:27:55.949284 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:27:56.107571 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:27:56.107681 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:27:56.117123 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
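The fetch stage above pulls the instance's user-data from the GCE metadata service. A minimal sketch of the same request, assuming it runs on a GCE instance with user-data set; the Metadata-Flavor header is required by the v1 endpoint, and the digest can be compared against the SHA512 Ignition logs:

    import hashlib
    import urllib.request

    # Same endpoint Ignition GETs in the log above.
    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        config = resp.read()

    # Ignition logs "parsing config with SHA512: ..."; this reproduces
    # that digest for the fetched bytes.
    print(hashlib.sha512(config).hexdigest())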
Nov 8 00:27:56.143294 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:27:56.165317 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:27:56.166185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 8 00:27:56.227340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (788) Nov 8 00:27:56.227396 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:56.227441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:56.227467 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:27:56.166271 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:27:56.269440 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:27:56.269489 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:27:56.166311 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:27:56.253097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:27:56.288291 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:27:56.293410 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:27:56.442588 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:27:56.452373 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:27:56.462302 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:27:56.474300 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:27:56.614867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:27:56.619440 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:27:56.647940 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:27:56.670335 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:56.680617 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:27:56.716154 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:27:56.725348 ignition[900]: INFO : Ignition 2.19.0 Nov 8 00:27:56.725348 ignition[900]: INFO : Stage: mount Nov 8 00:27:56.725348 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:56.725348 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:56.725348 ignition[900]: INFO : mount: mount passed Nov 8 00:27:56.725348 ignition[900]: INFO : Ignition finished successfully Nov 8 00:27:56.735928 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:27:56.747305 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:27:56.998393 systemd-networkd[750]: eth0: Gained IPv6LL Nov 8 00:27:57.114460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 8 00:27:57.160248 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913) Nov 8 00:27:57.178508 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:57.178603 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:57.178645 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:27:57.201622 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:27:57.201709 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:27:57.204899 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:27:57.238864 ignition[930]: INFO : Ignition 2.19.0 Nov 8 00:27:57.238864 ignition[930]: INFO : Stage: files Nov 8 00:27:57.253320 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:57.253320 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:57.253320 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:27:57.253320 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:27:57.253320 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:27:57.310336 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:27:57.310336 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 8 00:27:57.254780 unknown[930]: wrote ssh authorized keys file for user: core Nov 8 00:27:57.388346 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:27:57.757839 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:27:57.774321 
ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:57.774321 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 8 00:27:58.412830 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:27:58.911071 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 8 00:27:58.911071 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:27:58.948346 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:27:58.948346 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:27:58.948346 ignition[930]: INFO : files: files passed Nov 8 00:27:58.948346 ignition[930]: INFO : Ignition finished successfully Nov 8 00:27:58.915388 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:27:58.935393 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:27:58.954379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:27:58.989972 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:27:59.156309 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:27:59.156309 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:27:58.990187 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
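The files stage just completed above is driven by a declarative Ignition config. A hedged reconstruction of the kind of spec-3 fragment that would produce these operations; the field names follow the Ignition spec, but the contents here are illustrative, not the config this instance actually received:

    import json

    # Illustrative only: a v3-style config whose files stage would log the
    # write/link/preset operations seen above.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                "path": "/etc/flatcar/update.conf",
                "mode": 420,  # 0644
                "contents": {"source": "data:,GROUP=stable%0A"},  # hypothetical payload
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True}],
        },
    }
    print(json.dumps(config, indent=2))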
Nov 8 00:27:59.205476 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:27:59.055005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:27:59.059700 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:27:59.090395 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:27:59.169023 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:27:59.169206 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:27:59.181520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:27:59.205326 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:27:59.223436 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:27:59.230360 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:27:59.282015 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:27:59.306364 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:27:59.341930 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:27:59.353488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:27:59.378622 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:27:59.398573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:27:59.398780 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:27:59.431637 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:27:59.451527 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:27:59.469580 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:27:59.487602 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:27:59.507513 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:27:59.529514 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:27:59.549626 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:27:59.568555 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:27:59.588635 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:27:59.608534 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:27:59.627585 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:27:59.627846 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:27:59.658631 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:27:59.678516 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:27:59.699593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:27:59.699773 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:27:59.717456 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:27:59.717698 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Nov 8 00:27:59.828458 ignition[983]: INFO : Ignition 2.19.0 Nov 8 00:27:59.828458 ignition[983]: INFO : Stage: umount Nov 8 00:27:59.828458 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:59.828458 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 8 00:27:59.828458 ignition[983]: INFO : umount: umount passed Nov 8 00:27:59.828458 ignition[983]: INFO : Ignition finished successfully Nov 8 00:27:59.745566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:27:59.745839 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:27:59.766596 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:27:59.766796 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:27:59.794435 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:27:59.842455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:27:59.850510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:27:59.850748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:27:59.922678 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:27:59.922960 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:27:59.954530 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:27:59.955724 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:27:59.955852 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:27:59.960987 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:27:59.961100 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:27:59.979836 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:27:59.979980 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:27:59.996722 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:27:59.996787 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:28:00.014638 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:28:00.014720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:28:00.031694 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:28:00.031777 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:28:00.048588 systemd[1]: Stopped target network.target - Network. Nov 8 00:28:00.064567 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:28:00.064655 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:00.097562 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:28:00.105575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:28:00.109251 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:00.132455 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:28:00.140590 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:28:00.158572 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:28:00.158635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:00.173587 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:28:00.173652 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 8 00:28:00.190574 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:28:00.190651 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:28:00.207601 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:28:00.207675 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:00.241570 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:28:00.241650 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:00.250827 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:28:00.255256 systemd-networkd[750]: eth0: DHCPv6 lease lost Nov 8 00:28:00.277598 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:28:00.297822 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:28:00.297962 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:28:00.319509 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:28:00.319674 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:28:00.337349 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:28:00.337406 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:00.351306 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:28:00.385546 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:28:00.385633 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:00.400642 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:28:00.400719 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:00.428556 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:28:00.428636 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:00.446492 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:28:00.446585 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:00.467670 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:00.486966 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:28:00.487155 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:00.501685 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:28:00.501756 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:00.895353 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 8 00:28:00.522658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:28:00.522744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:00.539665 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:28:00.539757 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:00.577745 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:28:00.577852 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:00.602646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:00.602753 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 8 00:28:00.646460 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:28:00.650504 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:28:00.650588 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:00.688586 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:28:00.688660 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:00.719561 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:28:00.719658 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:00.741506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:00.741592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:00.750049 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:28:00.750213 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:28:00.767859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:28:00.767984 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:28:00.785725 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:28:00.808411 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:28:00.854617 systemd[1]: Switching root. Nov 8 00:28:01.134289 systemd-journald[183]: Journal stopped Nov 8 00:28:03.610564 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:28:03.610625 kernel: SELinux: policy capability open_perms=1 Nov 8 00:28:03.610647 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:28:03.610665 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:28:03.610683 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:28:03.610701 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:28:03.610722 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:28:03.610744 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:28:03.610763 kernel: audit: type=1403 audit(1762561681.518:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:28:03.610786 systemd[1]: Successfully loaded SELinux policy in 92.079ms. Nov 8 00:28:03.610809 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.072ms. Nov 8 00:28:03.610832 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:28:03.610852 systemd[1]: Detected virtualization google. Nov 8 00:28:03.610873 systemd[1]: Detected architecture x86-64. Nov 8 00:28:03.610899 systemd[1]: Detected first boot. Nov 8 00:28:03.610921 systemd[1]: Initializing machine ID from random generator. Nov 8 00:28:03.610942 zram_generator::config[1024]: No configuration found. Nov 8 00:28:03.610966 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:28:03.610987 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:28:03.611011 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Nov 8 00:28:03.611035 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:28:03.611058 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:28:03.611080 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:28:03.611100 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:28:03.611123 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:28:03.611172 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:28:03.611199 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:28:03.611220 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:28:03.611242 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:28:03.611270 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:28:03.611293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:03.611314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:28:03.611336 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:28:03.611358 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:28:03.611384 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:28:03.611406 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:28:03.611427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:03.611448 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:28:03.611470 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:28:03.611491 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:28:03.611519 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:28:03.611545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:03.611567 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:28:03.611592 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:28:03.611614 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:28:03.611637 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:28:03.611659 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:28:03.611682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:03.611703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:03.611726 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:03.611753 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:28:03.611776 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:28:03.611799 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:28:03.611821 systemd[1]: Mounting media.mount - External Media Directory... 
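Unit names like dev-disk-by\x2dlabel-OEM.device above come from systemd's path escaping: "/" becomes "-", and other special bytes, including a literal "-", become \xXX. A rough Python version of what `systemd-escape --path` does (simplified; the real rules also pass ":" through and handle empty paths):

    def systemd_escape_path(path: str) -> str:
        """Approximate systemd-escape --path for non-empty paths."""
        out = []
        for i, ch in enumerate(path.strip("/")):
            if ch == "/":
                out.append("-")           # path separators become dashes
            elif ch.isalnum() or (ch in "_." and i > 0):
                out.append(ch)            # safe characters pass through
            else:
                out.append(f"\\x{ord(ch):02x}")  # everything else, incl. "-"
        return "".join(out)

    # Matches the device unit named in the log above.
    print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")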
Nov 8 00:28:03.611842 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:03.611868 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:28:03.611891 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:28:03.611914 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:28:03.611937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:28:03.611961 systemd[1]: Reached target machines.target - Containers. Nov 8 00:28:03.611985 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:28:03.612008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:28:03.612032 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:28:03.612059 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:28:03.612082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:03.612105 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:28:03.612125 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:03.612164 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:28:03.612186 kernel: fuse: init (API version 7.39) Nov 8 00:28:03.612207 kernel: ACPI: bus type drm_connector registered Nov 8 00:28:03.612228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:28:03.612254 kernel: loop: module loaded Nov 8 00:28:03.612281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:28:03.612305 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:28:03.612328 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:28:03.612351 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:28:03.612373 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:28:03.612393 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:28:03.612411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:28:03.612468 systemd-journald[1111]: Collecting audit messages is disabled. Nov 8 00:28:03.612520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:28:03.612544 systemd-journald[1111]: Journal started Nov 8 00:28:03.612591 systemd-journald[1111]: Runtime Journal (/run/log/journal/ae9a2db689e846ad94d95bf64b4204d3) is 8.0M, max 148.7M, 140.7M free. Nov 8 00:28:02.382673 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:28:02.410042 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:28:02.410884 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:28:03.655173 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:28:03.689298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:28:03.689406 systemd[1]: verity-setup.service: Deactivated successfully. 
Nov 8 00:28:03.690186 systemd[1]: Stopped verity-setup.service. Nov 8 00:28:03.729929 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:03.739224 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:28:03.749940 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:28:03.760620 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:28:03.770613 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:28:03.780592 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:28:03.790632 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:28:03.800564 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:28:03.811880 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:28:03.823857 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:03.835945 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:28:03.836242 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:28:03.847859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:03.848095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:03.859824 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:28:03.860067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:28:03.870727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:28:03.870968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:28:03.882760 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:28:03.883016 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:28:03.893757 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:28:03.894016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:28:03.904747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:03.914841 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:28:03.926755 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:28:03.938779 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:03.964052 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:28:03.986392 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:28:04.005335 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:28:04.016331 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:28:04.016403 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:28:04.027697 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:28:04.051462 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:28:04.068431 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Nov 8 00:28:04.079679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:28:04.089109 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:28:04.106953 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:28:04.118406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:28:04.125481 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:28:04.139764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:28:04.149206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:28:04.166414 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:28:04.178282 systemd-journald[1111]: Time spent on flushing to /var/log/journal/ae9a2db689e846ad94d95bf64b4204d3 is 174.080ms for 930 entries. Nov 8 00:28:04.178282 systemd-journald[1111]: System Journal (/var/log/journal/ae9a2db689e846ad94d95bf64b4204d3) is 8.0M, max 584.8M, 576.8M free. Nov 8 00:28:04.407556 systemd-journald[1111]: Received client request to flush runtime journal. Nov 8 00:28:04.407638 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:28:04.407682 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:28:04.407718 kernel: loop1: detected capacity change from 0 to 54824 Nov 8 00:28:04.193926 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:28:04.214475 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:28:04.238916 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:28:04.250540 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:28:04.263425 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:28:04.275795 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:28:04.301949 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:28:04.333405 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:28:04.344765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:28:04.352138 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. Nov 8 00:28:04.352192 systemd-tmpfiles[1143]: ACLs are not supported, ignoring. Nov 8 00:28:04.366483 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:28:04.393606 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:04.412386 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:28:04.422975 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:28:04.436125 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:28:04.437121 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 8 00:28:04.480218 kernel: loop2: detected capacity change from 0 to 224512 Nov 8 00:28:04.572621 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:28:04.595879 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:28:04.610949 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:28:04.656777 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Nov 8 00:28:04.657853 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Nov 8 00:28:04.682768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:04.711217 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:28:04.765259 kernel: loop5: detected capacity change from 0 to 54824 Nov 8 00:28:04.801180 kernel: loop6: detected capacity change from 0 to 224512 Nov 8 00:28:04.849598 kernel: loop7: detected capacity change from 0 to 142488 Nov 8 00:28:04.894100 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Nov 8 00:28:04.895097 (sd-merge)[1169]: Merged extensions into '/usr'. Nov 8 00:28:04.908037 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:28:04.908062 systemd[1]: Reloading... Nov 8 00:28:05.104256 zram_generator::config[1197]: No configuration found. Nov 8 00:28:05.325068 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:28:05.405883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:05.501930 systemd[1]: Reloading finished in 592 ms. Nov 8 00:28:05.531899 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:28:05.542912 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:28:05.557369 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:28:05.575440 systemd[1]: Starting ensure-sysext.service... Nov 8 00:28:05.590319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:28:05.611384 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:05.628309 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:28:05.628338 systemd[1]: Reloading... Nov 8 00:28:05.656447 systemd-udevd[1238]: Using default interface naming scheme 'v255'. Nov 8 00:28:05.664512 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:28:05.665917 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:28:05.668299 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:28:05.668834 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Nov 8 00:28:05.668967 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Nov 8 00:28:05.678925 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. 
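The (sd-merge) messages above are systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-gce' onto /usr. An extension is only merged if its extension-release file is compatible with the host's os-release; a rough sketch of that check, using the kubernetes extension as the example (standard file locations, with the matching simplified relative to systemd's full rules):

    import pathlib

    def parse_release(path: pathlib.Path) -> dict[str, str]:
        """Parse an os-release style KEY=value file."""
        pairs = {}
        for line in path.read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                pairs[key] = value.strip().strip('"')
        return pairs

    host = parse_release(pathlib.Path("/etc/os-release"))
    ext = parse_release(pathlib.Path(
        "/usr/lib/extension-release.d/extension-release.kubernetes"))

    # Simplified: systemd additionally compares SYSEXT_LEVEL / VERSION_ID.
    compatible = ext.get("ID") in ("_any", host.get("ID"))
    print("mergeable:", compatible)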
Nov 8 00:28:05.679237 systemd-tmpfiles[1237]: Skipping /boot Nov 8 00:28:05.699831 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:28:05.700019 systemd-tmpfiles[1237]: Skipping /boot Nov 8 00:28:05.785179 zram_generator::config[1261]: No configuration found. Nov 8 00:28:06.050180 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 8 00:28:06.064979 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:28:06.085206 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Nov 8 00:28:06.103180 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 8 00:28:06.120318 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:06.143169 kernel: ACPI: button: Sleep Button [SLPF] Nov 8 00:28:06.164176 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Nov 8 00:28:06.293178 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1313) Nov 8 00:28:06.304240 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:28:06.308166 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:28:06.311894 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:28:06.312249 systemd[1]: Reloading finished in 683 ms. Nov 8 00:28:06.336785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:06.353645 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:06.393222 systemd[1]: Finished ensure-sysext.service. Nov 8 00:28:06.401731 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:28:06.436700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 8 00:28:06.448587 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:06.455408 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:06.476471 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:28:06.488591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:28:06.498389 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:28:06.517876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:28:06.529424 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:28:06.546431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:28:06.552666 lvm[1349]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:28:06.563416 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:28:06.564615 augenrules[1361]: No rules Nov 8 00:28:06.579413 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 8 00:28:06.589444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Nov 8 00:28:06.596782 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:28:06.613742 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:28:06.634395 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:28:06.640038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:28:06.640437 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:28:06.644886 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:28:06.656355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:28:06.656446 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:28:06.659528 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:06.686849 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:28:06.698890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:28:06.710986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:28:06.711260 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:28:06.722770 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:28:06.723061 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:28:06.733754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:28:06.733984 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:28:06.734661 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:28:06.734883 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:28:06.740946 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:28:06.743771 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:28:06.754017 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 8 00:28:06.766889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:06.774447 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:28:06.776965 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Nov 8 00:28:06.777102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:28:06.777243 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:28:06.781070 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:28:06.790423 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:28:06.790512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:28:06.791304 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:28:06.793548 lvm[1389]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Nov 8 00:28:06.839299 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:28:06.852718 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:28:06.876235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:06.887877 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Nov 8 00:28:06.900654 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:28:06.995466 systemd-networkd[1371]: lo: Link UP Nov 8 00:28:06.995487 systemd-networkd[1371]: lo: Gained carrier Nov 8 00:28:06.997838 systemd-networkd[1371]: Enumeration completed Nov 8 00:28:06.998863 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:28:06.998871 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:28:06.999614 systemd-networkd[1371]: eth0: Link UP Nov 8 00:28:06.999621 systemd-networkd[1371]: eth0: Gained carrier Nov 8 00:28:06.999647 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:28:06.999985 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:28:07.011237 systemd-networkd[1371]: eth0: Overlong DHCP hostname received, shortened from 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562.c.flatcar-212911.internal' to 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:28:07.011262 systemd-networkd[1371]: eth0: DHCPv4 address 10.128.0.61/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 8 00:28:07.012026 systemd-resolved[1372]: Positive Trust Anchors: Nov 8 00:28:07.012039 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:28:07.012103 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:28:07.018431 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:28:07.021057 systemd-resolved[1372]: Defaulting to hostname 'linux'. Nov 8 00:28:07.029406 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:28:07.039463 systemd[1]: Reached target network.target - Network. Nov 8 00:28:07.048315 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:07.059355 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:28:07.069504 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:28:07.080450 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:28:07.092608 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:28:07.102545 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Nov 8 00:28:07.113340 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:28:07.124330 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:28:07.124403 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:28:07.133319 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:28:07.143582 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:28:07.155021 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:28:07.166723 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:28:07.177255 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:28:07.187473 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:28:07.197308 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:28:07.206354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:28:07.206404 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:28:07.218367 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:28:07.233412 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:28:07.255699 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:28:07.281384 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:28:07.298389 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:28:07.306470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:28:07.317915 jq[1422]: false Nov 8 00:28:07.317046 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:28:07.336402 systemd[1]: Started ntpd.service - Network Time Service. Nov 8 00:28:07.354294 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:28:07.357213 coreos-metadata[1420]: Nov 08 00:28:07.356 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Nov 8 00:28:07.364380 coreos-metadata[1420]: Nov 08 00:28:07.362 INFO Fetch successful Nov 8 00:28:07.364380 coreos-metadata[1420]: Nov 08 00:28:07.362 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Nov 8 00:28:07.364380 coreos-metadata[1420]: Nov 08 00:28:07.362 INFO Fetch successful Nov 8 00:28:07.364380 coreos-metadata[1420]: Nov 08 00:28:07.362 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Nov 8 00:28:07.369659 coreos-metadata[1420]: Nov 08 00:28:07.369 INFO Fetch successful Nov 8 00:28:07.369760 coreos-metadata[1420]: Nov 08 00:28:07.369 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Nov 8 00:28:07.370245 coreos-metadata[1420]: Nov 08 00:28:07.370 INFO Fetch successful Nov 8 00:28:07.372459 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:28:07.392596 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 8 00:28:07.398044 extend-filesystems[1425]: Found loop4 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found loop5 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found loop6 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found loop7 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda1 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda2 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda3 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found usr Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda4 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda6 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda7 Nov 8 00:28:07.404598 extend-filesystems[1425]: Found sda9 Nov 8 00:28:07.404598 extend-filesystems[1425]: Checking size of /dev/sda9 Nov 8 00:28:07.411052 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:28:07.421528 ntpd[1427]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:28:07.421573 ntpd[1427]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: ntpd 4.2.8p17@1.4004-o Fri Nov 7 22:06:24 UTC 2025 (1): Starting Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: ---------------------------------------------------- Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: corporation. Support and training for ntp-4 are Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: available at https://www.nwtime.org/support Nov 8 00:28:07.422065 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: ---------------------------------------------------- Nov 8 00:28:07.421587 ntpd[1427]: ---------------------------------------------------- Nov 8 00:28:07.421601 ntpd[1427]: ntp-4 is maintained by Network Time Foundation, Nov 8 00:28:07.421615 ntpd[1427]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 8 00:28:07.421630 ntpd[1427]: corporation. Support and training for ntp-4 are Nov 8 00:28:07.421643 ntpd[1427]: available at https://www.nwtime.org/support Nov 8 00:28:07.421657 ntpd[1427]: ----------------------------------------------------
Nov 8 00:28:07.424497 ntpd[1427]: proto: precision = 0.074 usec (-24) Nov 8 00:28:07.425440 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: proto: precision = 0.074 usec (-24) Nov 8 00:28:07.425440 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: basedate set to 2025-10-26 Nov 8 00:28:07.425440 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: gps base set to 2025-10-26 (week 2390) Nov 8 00:28:07.424927 ntpd[1427]: basedate set to 2025-10-26 Nov 8 00:28:07.424949 ntpd[1427]: gps base set to 2025-10-26 (week 2390) Nov 8 00:28:07.427730 ntpd[1427]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:28:07.427910 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Listen and drop on 0 v6wildcard [::]:123 Nov 8 00:28:07.427910 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:28:07.427803 ntpd[1427]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 8 00:28:07.428065 ntpd[1427]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:28:07.428221 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Listen normally on 2 lo 127.0.0.1:123 Nov 8 00:28:07.428275 ntpd[1427]: Listen normally on 3 eth0 10.128.0.61:123 Nov 8 00:28:07.428470 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Listen normally on 3 eth0 10.128.0.61:123 Nov 8 00:28:07.428470 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Listen normally on 4 lo [::1]:123 Nov 8 00:28:07.428470 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: bind(21) AF_INET6 fe80::4001:aff:fe80:3d%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:28:07.428470 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:3d%2#123 Nov 8 00:28:07.428357 ntpd[1427]: Listen normally on 4 lo [::1]:123 Nov 8 00:28:07.430354 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:3d%2 Nov 8 00:28:07.430354 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: Listening on routing socket on fd #21 for interface updates Nov 8 00:28:07.428429 ntpd[1427]: bind(21) AF_INET6 fe80::4001:aff:fe80:3d%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:28:07.428459 ntpd[1427]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:3d%2#123 Nov 8 00:28:07.428482 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:3d%2 Nov 8 00:28:07.428528 ntpd[1427]: Listening on routing socket on fd #21 for interface updates Nov 8 00:28:07.430990 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:28:07.431032 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:28:07.431205 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 8 00:28:07.431205 ntpd[1427]: 8 Nov 00:28:07 ntpd[1427]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 8 00:28:07.438644 dbus-daemon[1421]: [system] SELinux support is enabled Nov 8 00:28:07.439567 extend-filesystems[1425]: Resized partition /dev/sda9 Nov 8 00:28:07.527422 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Nov 8 00:28:07.527508 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1264) Nov 8 00:28:07.445238 dbus-daemon[1421]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1371 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 8 00:28:07.527680 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:28:07.566627 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Nov 8 00:28:07.499037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 8 00:28:07.499876 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:28:07.509465 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:28:07.528598 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:28:07.545644 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:28:07.569531 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:28:07.569813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:28:07.571587 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:28:07.572352 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:28:07.578925 extend-filesystems[1445]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:28:07.578925 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 8 00:28:07.578925 extend-filesystems[1445]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Nov 8 00:28:07.583135 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:28:07.626520 jq[1451]: true Nov 8 00:28:07.626852 extend-filesystems[1425]: Resized filesystem in /dev/sda9 Nov 8 00:28:07.583418 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:28:07.612742 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:28:07.613023 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:28:07.638931 update_engine[1449]: I20251108 00:28:07.638721 1449 main.cc:92] Flatcar Update Engine starting Nov 8 00:28:07.642101 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:28:07.642137 systemd-logind[1440]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 8 00:28:07.642635 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:28:07.643251 systemd-logind[1440]: New seat seat0. Nov 8 00:28:07.643839 update_engine[1449]: I20251108 00:28:07.643657 1449 update_check_scheduler.cc:74] Next update check in 4m38s Nov 8 00:28:07.647583 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:28:07.733016 jq[1458]: true Nov 8 00:28:07.736429 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:28:07.740873 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:28:07.770426 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:28:07.789883 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:28:07.802173 tar[1457]: linux-amd64/LICENSE Nov 8 00:28:07.802173 tar[1457]: linux-amd64/helm Nov 8 00:28:07.808566 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:28:07.819044 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:28:07.819344 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:28:07.819590 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:28:07.842532 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:28:07.850658 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:28:07.850935 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:28:07.873029 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:28:07.966936 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:28:07.971656 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:28:08.006373 systemd[1]: Starting sshkeys.service... Nov 8 00:28:08.070284 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:28:08.097683 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:28:08.226070 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:28:08.226461 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:28:08.230016 dbus-daemon[1421]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1480 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:28:08.252399 systemd[1]: Starting polkit.service - Authorization Manager... 
Nov 8 00:28:08.274818 coreos-metadata[1494]: Nov 08 00:28:08.273 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 8 00:28:08.277397 coreos-metadata[1494]: Nov 08 00:28:08.275 INFO Fetch failed with 404: resource not found Nov 8 00:28:08.277397 coreos-metadata[1494]: Nov 08 00:28:08.275 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 8 00:28:08.278047 coreos-metadata[1494]: Nov 08 00:28:08.277 INFO Fetch successful Nov 8 00:28:08.278047 coreos-metadata[1494]: Nov 08 00:28:08.277 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 8 00:28:08.279231 coreos-metadata[1494]: Nov 08 00:28:08.278 INFO Fetch failed with 404: resource not found Nov 8 00:28:08.279231 coreos-metadata[1494]: Nov 08 00:28:08.278 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 8 00:28:08.279682 coreos-metadata[1494]: Nov 08 00:28:08.279 INFO Fetch failed with 404: resource not found Nov 8 00:28:08.279682 coreos-metadata[1494]: Nov 08 00:28:08.279 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 8 00:28:08.284245 coreos-metadata[1494]: Nov 08 00:28:08.282 INFO Fetch successful Nov 8 00:28:08.285436 unknown[1494]: wrote ssh authorized keys file for user: core Nov 8 00:28:08.335688 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:28:08.352037 update-ssh-keys[1508]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:28:08.352234 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:28:08.373668 systemd[1]: Finished sshkeys.service. Nov 8 00:28:08.409444 polkitd[1506]: Started polkitd version 121 Nov 8 00:28:08.424843 ntpd[1427]: bind(24) AF_INET6 fe80::4001:aff:fe80:3d%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:28:08.425457 ntpd[1427]: 8 Nov 00:28:08 ntpd[1427]: bind(24) AF_INET6 fe80::4001:aff:fe80:3d%2#123 flags 0x11 failed: Cannot assign requested address Nov 8 00:28:08.425457 ntpd[1427]: 8 Nov 00:28:08 ntpd[1427]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:3d%2#123 Nov 8 00:28:08.425457 ntpd[1427]: 8 Nov 00:28:08 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:3d%2 Nov 8 00:28:08.424892 ntpd[1427]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:3d%2#123 Nov 8 00:28:08.424914 ntpd[1427]: failed to init interface for address fe80::4001:aff:fe80:3d%2 Nov 8 00:28:08.432034 polkitd[1506]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:28:08.436596 polkitd[1506]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:28:08.441829 polkitd[1506]: Finished loading, compiling and executing 2 rules Nov 8 00:28:08.443795 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:28:08.444480 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:28:08.446534 polkitd[1506]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:28:08.487568 systemd-hostnamed[1480]: Hostname set to (transient) Nov 8 00:28:08.488786 systemd-resolved[1372]: System hostname changed to 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562'. Nov 8 00:28:08.519198 systemd-networkd[1371]: eth0: Gained IPv6LL Nov 8 00:28:08.528248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Nov 8 00:28:08.539963 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:28:08.543827 containerd[1463]: time="2025-11-08T00:28:08.543716794Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:28:08.561473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:08.578787 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:28:08.594540 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Nov 8 00:28:08.645301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:28:08.651012 init.sh[1524]: + '[' -e /etc/default/instance_configs.cfg.template ']' Nov 8 00:28:08.654171 init.sh[1524]: + echo -e '[InstanceSetup]\nset_host_keys = false' Nov 8 00:28:08.654171 init.sh[1524]: + /usr/bin/google_instance_setup Nov 8 00:28:08.674284 containerd[1463]: time="2025-11-08T00:28:08.674195256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:08.680506 containerd[1463]: time="2025-11-08T00:28:08.680454040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:08.680506 containerd[1463]: time="2025-11-08T00:28:08.680505237Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:28:08.680650 containerd[1463]: time="2025-11-08T00:28:08.680530553Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:28:08.680757 containerd[1463]: time="2025-11-08T00:28:08.680730564Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:28:08.680838 containerd[1463]: time="2025-11-08T00:28:08.680768162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:08.680972 containerd[1463]: time="2025-11-08T00:28:08.680879198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:08.680972 containerd[1463]: time="2025-11-08T00:28:08.680909590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681238080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681268393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681291947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681311539Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681447387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681742718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681939112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.681965177Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:28:08.683125 containerd[1463]: time="2025-11-08T00:28:08.682097360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:28:08.688491 containerd[1463]: time="2025-11-08T00:28:08.688036325Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:28:08.699836 containerd[1463]: time="2025-11-08T00:28:08.698788087Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:28:08.699836 containerd[1463]: time="2025-11-08T00:28:08.698884751Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:28:08.699836 containerd[1463]: time="2025-11-08T00:28:08.698913753Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:28:08.699836 containerd[1463]: time="2025-11-08T00:28:08.699022079Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:28:08.699836 containerd[1463]: time="2025-11-08T00:28:08.699059999Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:28:08.699836 containerd[1463]: time="2025-11-08T00:28:08.699313059Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700243993Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700450019Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700476940Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700502935Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700528889Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700551194Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700572962Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700597516Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700619047Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700641735Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700662726Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700686650Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700737153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701166 containerd[1463]: time="2025-11-08T00:28:08.700766583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700787820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700811543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700831127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700854427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700877372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700899169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700922165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700946839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700967189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.700989755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.701011634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.701036708Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.701079789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.701801 containerd[1463]: time="2025-11-08T00:28:08.701101277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.701120708Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705667975Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705794590Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705817731Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705841168Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705860438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705889575Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705907382Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:28:08.707173 containerd[1463]: time="2025-11-08T00:28:08.705926638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:28:08.707639 containerd[1463]: time="2025-11-08T00:28:08.706435528Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:28:08.707639 containerd[1463]: time="2025-11-08T00:28:08.706538042Z" level=info msg="Connect containerd service" Nov 8 00:28:08.707639 containerd[1463]: time="2025-11-08T00:28:08.706590254Z" level=info msg="using legacy CRI server" Nov 8 00:28:08.707639 containerd[1463]: time="2025-11-08T00:28:08.706603030Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:28:08.707639 containerd[1463]: time="2025-11-08T00:28:08.706774617Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712426973Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712587280Z" level=info msg="Start subscribing containerd event" Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712666163Z" level=info msg="Start recovering state" Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712757807Z" level=info msg="Start event monitor" Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712788479Z" level=info msg="Start snapshots syncer" Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712803112Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:28:08.715164 containerd[1463]: time="2025-11-08T00:28:08.712815749Z" level=info msg="Start streaming server" Nov 8 00:28:08.719162 containerd[1463]: time="2025-11-08T00:28:08.717562125Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:28:08.719162 containerd[1463]: time="2025-11-08T00:28:08.717715207Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:28:08.721719 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:28:08.723590 containerd[1463]: time="2025-11-08T00:28:08.723547336Z" level=info msg="containerd successfully booted in 0.185023s" Nov 8 00:28:09.502669 sshd_keygen[1453]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:28:09.596467 instance-setup[1533]: INFO Running google_set_multiqueue. Nov 8 00:28:09.623707 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:28:09.630664 instance-setup[1533]: INFO Set channels for eth0 to 2. Nov 8 00:28:09.644283 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:28:09.648779 instance-setup[1533]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Nov 8 00:28:09.651557 instance-setup[1533]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Nov 8 00:28:09.652873 instance-setup[1533]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Nov 8 00:28:09.657540 instance-setup[1533]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Nov 8 00:28:09.658254 instance-setup[1533]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Nov 8 00:28:09.659559 systemd[1]: Started sshd@0-10.128.0.61:22-139.178.89.65:60320.service - OpenSSH per-connection server daemon (139.178.89.65:60320). Nov 8 00:28:09.663503 instance-setup[1533]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Nov 8 00:28:09.665670 instance-setup[1533]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Nov 8 00:28:09.669588 instance-setup[1533]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Nov 8 00:28:09.693915 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:28:09.694230 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:28:09.697953 tar[1457]: linux-amd64/README.md Nov 8 00:28:09.713933 instance-setup[1533]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 8 00:28:09.726893 instance-setup[1533]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 8 00:28:09.730528 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:28:09.733105 instance-setup[1533]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 8 00:28:09.733193 instance-setup[1533]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 8 00:28:09.741841 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:28:09.759842 init.sh[1524]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 8 00:28:09.790069 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:28:09.811385 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:28:09.830671 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:28:09.840822 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:28:09.959378 startup-script[1584]: INFO Starting startup scripts. Nov 8 00:28:09.966749 startup-script[1584]: INFO No startup scripts found in metadata. Nov 8 00:28:09.966837 startup-script[1584]: INFO Finished running startup scripts. Nov 8 00:28:09.995506 init.sh[1524]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 8 00:28:09.995506 init.sh[1524]: + daemon_pids=() Nov 8 00:28:09.995506 init.sh[1524]: + for d in accounts clock_skew network Nov 8 00:28:09.997057 init.sh[1524]: + daemon_pids+=($!) Nov 8 00:28:09.997057 init.sh[1524]: + for d in accounts clock_skew network Nov 8 00:28:09.997057 init.sh[1524]: + daemon_pids+=($!) Nov 8 00:28:09.997057 init.sh[1524]: + for d in accounts clock_skew network Nov 8 00:28:09.997057 init.sh[1524]: + daemon_pids+=($!) Nov 8 00:28:09.997057 init.sh[1524]: + NOTIFY_SOCKET=/run/systemd/notify Nov 8 00:28:09.997057 init.sh[1524]: + /usr/bin/systemd-notify --ready Nov 8 00:28:09.997456 init.sh[1591]: + /usr/bin/google_clock_skew_daemon Nov 8 00:28:09.998258 init.sh[1592]: + /usr/bin/google_network_daemon Nov 8 00:28:09.999172 init.sh[1590]: + /usr/bin/google_accounts_daemon Nov 8 00:28:10.030730 systemd[1]: Started oem-gce.service - GCE Linux Agent. Nov 8 00:28:10.051495 init.sh[1524]: + wait -n 1590 1591 1592 Nov 8 00:28:10.084035 sshd[1563]: Accepted publickey for core from 139.178.89.65 port 60320 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:10.086769 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:10.117357 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:28:10.137640 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:28:10.164278 systemd-logind[1440]: New session 1 of user core. Nov 8 00:28:10.190004 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:28:10.213631 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:28:10.257308 (systemd)[1596]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:28:10.541225 systemd[1596]: Queued start job for default target default.target. Nov 8 00:28:10.547604 systemd[1596]: Created slice app.slice - User Application Slice. Nov 8 00:28:10.547661 systemd[1596]: Reached target paths.target - Paths. Nov 8 00:28:10.547686 systemd[1596]: Reached target timers.target - Timers. Nov 8 00:28:10.557362 systemd[1596]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:28:10.585897 systemd[1596]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:28:10.587127 systemd[1596]: Reached target sockets.target - Sockets. 
Nov 8 00:28:10.587179 systemd[1596]: Reached target basic.target - Basic System. Nov 8 00:28:10.587267 systemd[1596]: Reached target default.target - Main User Target. Nov 8 00:28:10.587326 systemd[1596]: Startup finished in 307ms. Nov 8 00:28:10.589408 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:28:10.606370 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:28:10.630288 google-networking[1592]: INFO Starting Google Networking daemon. Nov 8 00:28:10.635948 groupadd[1609]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 8 00:28:10.638390 google-clock-skew[1591]: INFO Starting Google Clock Skew daemon. Nov 8 00:28:10.642933 groupadd[1609]: group added to /etc/gshadow: name=google-sudoers Nov 8 00:28:10.650215 google-clock-skew[1591]: INFO Clock drift token has changed: 0. Nov 8 00:28:10.703553 groupadd[1609]: new group: name=google-sudoers, GID=1000 Nov 8 00:28:10.735928 google-accounts[1590]: INFO Starting Google Accounts daemon. Nov 8 00:28:10.749995 google-accounts[1590]: WARNING OS Login not installed. Nov 8 00:28:10.751523 google-accounts[1590]: INFO Creating a new user account for 0. Nov 8 00:28:10.758878 init.sh[1621]: useradd: invalid user name '0': use --badname to ignore Nov 8 00:28:10.760080 google-accounts[1590]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 8 00:28:10.859786 systemd[1]: Started sshd@1-10.128.0.61:22-139.178.89.65:41074.service - OpenSSH per-connection server daemon (139.178.89.65:41074). Nov 8 00:28:11.096384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:11.108368 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:28:11.113846 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:11.118552 systemd[1]: Startup finished in 1.027s (kernel) + 9.739s (initrd) + 9.681s (userspace) = 20.448s. Nov 8 00:28:11.177231 sshd[1625]: Accepted publickey for core from 139.178.89.65 port 41074 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:11.179826 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:11.187838 systemd-logind[1440]: New session 2 of user core. Nov 8 00:28:11.194550 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:28:11.394285 sshd[1625]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:11.400409 systemd[1]: sshd@1-10.128.0.61:22-139.178.89.65:41074.service: Deactivated successfully. Nov 8 00:28:11.403885 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:28:11.406451 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:28:11.408828 systemd-logind[1440]: Removed session 2. Nov 8 00:28:11.000177 systemd-resolved[1372]: Clock change detected. Flushing caches. Nov 8 00:28:11.018277 systemd-journald[1111]: Time jumped backwards, rotating. Nov 8 00:28:11.000496 google-clock-skew[1591]: INFO Synced system time with hardware clock. Nov 8 00:28:11.018500 ntpd[1427]: 8 Nov 00:28:11 ntpd[1427]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:3d%2]:123 Nov 8 00:28:11.011655 ntpd[1427]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:3d%2]:123 Nov 8 00:28:11.044161 systemd[1]: Started sshd@2-10.128.0.61:22-139.178.89.65:41082.service - OpenSSH per-connection server daemon (139.178.89.65:41082). 
Nov 8 00:28:11.333798 sshd[1647]: Accepted publickey for core from 139.178.89.65 port 41082 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:11.335315 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:11.344169 systemd-logind[1440]: New session 3 of user core. Nov 8 00:28:11.349965 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:28:11.544087 sshd[1647]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:11.549557 systemd[1]: sshd@2-10.128.0.61:22-139.178.89.65:41082.service: Deactivated successfully. Nov 8 00:28:11.553406 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:28:11.555843 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:28:11.557535 systemd-logind[1440]: Removed session 3. Nov 8 00:28:11.600132 systemd[1]: Started sshd@3-10.128.0.61:22-139.178.89.65:41096.service - OpenSSH per-connection server daemon (139.178.89.65:41096). Nov 8 00:28:11.624445 kubelet[1632]: E1108 00:28:11.624369 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:11.628409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:11.628699 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:11.629602 systemd[1]: kubelet.service: Consumed 1.317s CPU time. Nov 8 00:28:11.891883 sshd[1656]: Accepted publickey for core from 139.178.89.65 port 41096 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:11.893768 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:11.899495 systemd-logind[1440]: New session 4 of user core. Nov 8 00:28:11.906981 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:28:12.105287 sshd[1656]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:12.110008 systemd[1]: sshd@3-10.128.0.61:22-139.178.89.65:41096.service: Deactivated successfully. Nov 8 00:28:12.112693 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:28:12.114994 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:28:12.116705 systemd-logind[1440]: Removed session 4. Nov 8 00:28:12.164154 systemd[1]: Started sshd@4-10.128.0.61:22-139.178.89.65:41098.service - OpenSSH per-connection server daemon (139.178.89.65:41098). Nov 8 00:28:12.453689 sshd[1664]: Accepted publickey for core from 139.178.89.65 port 41098 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:12.455557 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:12.461801 systemd-logind[1440]: New session 5 of user core. Nov 8 00:28:12.469091 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 8 00:28:12.647955 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:28:12.648506 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:12.665695 sudo[1667]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:12.708851 sshd[1664]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:12.713905 systemd[1]: sshd@4-10.128.0.61:22-139.178.89.65:41098.service: Deactivated successfully. Nov 8 00:28:12.716301 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:28:12.718252 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:28:12.719778 systemd-logind[1440]: Removed session 5. Nov 8 00:28:12.765122 systemd[1]: Started sshd@5-10.128.0.61:22-139.178.89.65:41100.service - OpenSSH per-connection server daemon (139.178.89.65:41100). Nov 8 00:28:13.043094 sshd[1672]: Accepted publickey for core from 139.178.89.65 port 41100 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:13.045105 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:13.051449 systemd-logind[1440]: New session 6 of user core. Nov 8 00:28:13.058952 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:28:13.220833 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:28:13.221349 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:13.226248 sudo[1676]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:13.239627 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:28:13.240146 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:13.256139 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:13.260339 auditctl[1679]: No rules Nov 8 00:28:13.260896 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:28:13.261170 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:13.269327 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:13.301466 augenrules[1698]: No rules Nov 8 00:28:13.304002 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:13.305341 sudo[1675]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:13.348642 sshd[1672]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:13.354183 systemd[1]: sshd@5-10.128.0.61:22-139.178.89.65:41100.service: Deactivated successfully. Nov 8 00:28:13.356687 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:28:13.358062 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:28:13.359975 systemd-logind[1440]: Removed session 6. Nov 8 00:28:13.415162 systemd[1]: Started sshd@6-10.128.0.61:22-139.178.89.65:41102.service - OpenSSH per-connection server daemon (139.178.89.65:41102). Nov 8 00:28:13.696736 sshd[1706]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:28:13.698600 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:13.704996 systemd-logind[1440]: New session 7 of user core. 
Nov 8 00:28:13.715952 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:28:13.878252 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:28:13.878763 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:14.324147 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:28:14.329060 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:28:14.785847 dockerd[1726]: time="2025-11-08T00:28:14.785694814Z" level=info msg="Starting up" Nov 8 00:28:15.061686 dockerd[1726]: time="2025-11-08T00:28:15.061335098Z" level=info msg="Loading containers: start." Nov 8 00:28:15.218756 kernel: Initializing XFRM netlink socket Nov 8 00:28:15.334976 systemd-networkd[1371]: docker0: Link UP Nov 8 00:28:15.353857 dockerd[1726]: time="2025-11-08T00:28:15.353810747Z" level=info msg="Loading containers: done." Nov 8 00:28:15.374825 dockerd[1726]: time="2025-11-08T00:28:15.374194722Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:28:15.374825 dockerd[1726]: time="2025-11-08T00:28:15.374339276Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:28:15.374825 dockerd[1726]: time="2025-11-08T00:28:15.374483279Z" level=info msg="Daemon has completed initialization" Nov 8 00:28:15.415438 dockerd[1726]: time="2025-11-08T00:28:15.415359261Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:28:15.415847 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:28:16.401182 containerd[1463]: time="2025-11-08T00:28:16.401128133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:28:16.914467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890256177.mount: Deactivated successfully. 
Nov 8 00:28:18.596448 containerd[1463]: time="2025-11-08T00:28:18.596375392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:18.598177 containerd[1463]: time="2025-11-08T00:28:18.597760506Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28845499" Nov 8 00:28:18.599940 containerd[1463]: time="2025-11-08T00:28:18.599889266Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:18.604545 containerd[1463]: time="2025-11-08T00:28:18.604474302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:18.606091 containerd[1463]: time="2025-11-08T00:28:18.605873643Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.20468641s" Nov 8 00:28:18.606091 containerd[1463]: time="2025-11-08T00:28:18.605924180Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:28:18.607473 containerd[1463]: time="2025-11-08T00:28:18.607432205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:28:20.149860 containerd[1463]: time="2025-11-08T00:28:20.149793247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:20.151571 containerd[1463]: time="2025-11-08T00:28:20.151505496Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24788961" Nov 8 00:28:20.154746 containerd[1463]: time="2025-11-08T00:28:20.152588720Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:20.156498 containerd[1463]: time="2025-11-08T00:28:20.156455842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:20.158020 containerd[1463]: time="2025-11-08T00:28:20.157975000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.550489377s" Nov 8 00:28:20.158187 containerd[1463]: time="2025-11-08T00:28:20.158160543Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:28:20.159381 containerd[1463]: 
time="2025-11-08T00:28:20.159339987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:28:21.379255 containerd[1463]: time="2025-11-08T00:28:21.379188387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:21.380886 containerd[1463]: time="2025-11-08T00:28:21.380762040Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19178205" Nov 8 00:28:21.383749 containerd[1463]: time="2025-11-08T00:28:21.382078141Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:21.385991 containerd[1463]: time="2025-11-08T00:28:21.385952863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:21.387422 containerd[1463]: time="2025-11-08T00:28:21.387380212Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.228000876s" Nov 8 00:28:21.387565 containerd[1463]: time="2025-11-08T00:28:21.387537841Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:28:21.389049 containerd[1463]: time="2025-11-08T00:28:21.388985845Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:28:21.878995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:28:21.888077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:22.319992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:22.330743 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:22.446177 kubelet[1943]: E1108 00:28:22.445713 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:22.451704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:22.451954 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:22.714475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490412588.mount: Deactivated successfully. 
Nov 8 00:28:23.468381 containerd[1463]: time="2025-11-08T00:28:23.468302588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:23.469629 containerd[1463]: time="2025-11-08T00:28:23.469562400Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30926101" Nov 8 00:28:23.470878 containerd[1463]: time="2025-11-08T00:28:23.470825872Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:23.475114 containerd[1463]: time="2025-11-08T00:28:23.473836351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:23.475114 containerd[1463]: time="2025-11-08T00:28:23.474888772Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.085835289s" Nov 8 00:28:23.475114 containerd[1463]: time="2025-11-08T00:28:23.474939915Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:28:23.476283 containerd[1463]: time="2025-11-08T00:28:23.475849706Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:28:23.880993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257807347.mount: Deactivated successfully. 
Nov 8 00:28:25.074709 containerd[1463]: time="2025-11-08T00:28:25.074634008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:25.077423 containerd[1463]: time="2025-11-08T00:28:25.077349368Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Nov 8 00:28:25.079289 containerd[1463]: time="2025-11-08T00:28:25.079243215Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:25.090903 containerd[1463]: time="2025-11-08T00:28:25.090813737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:25.092630 containerd[1463]: time="2025-11-08T00:28:25.092412119Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.616518199s" Nov 8 00:28:25.092630 containerd[1463]: time="2025-11-08T00:28:25.092462467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:28:25.093573 containerd[1463]: time="2025-11-08T00:28:25.093333293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:28:25.490937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663889249.mount: Deactivated successfully. 
Nov 8 00:28:25.496478 containerd[1463]: time="2025-11-08T00:28:25.496422695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:25.497603 containerd[1463]: time="2025-11-08T00:28:25.497534773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Nov 8 00:28:25.500371 containerd[1463]: time="2025-11-08T00:28:25.498639311Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:25.501761 containerd[1463]: time="2025-11-08T00:28:25.501528211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:25.503440 containerd[1463]: time="2025-11-08T00:28:25.502841797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 409.467174ms" Nov 8 00:28:25.503440 containerd[1463]: time="2025-11-08T00:28:25.502888615Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:28:25.504111 containerd[1463]: time="2025-11-08T00:28:25.504070273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:28:25.884810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937764829.mount: Deactivated successfully. Nov 8 00:28:28.182320 containerd[1463]: time="2025-11-08T00:28:28.182247276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:28.189649 containerd[1463]: time="2025-11-08T00:28:28.189576442Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57689565" Nov 8 00:28:28.189831 containerd[1463]: time="2025-11-08T00:28:28.189789328Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:28.193910 containerd[1463]: time="2025-11-08T00:28:28.193867842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:28.195569 containerd[1463]: time="2025-11-08T00:28:28.195525775Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.691419075s" Nov 8 00:28:28.195753 containerd[1463]: time="2025-11-08T00:28:28.195698592Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:28:31.329572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
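The etcd pull above completes the control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd). These images land in containerd's store, not Docker's, so docker images will not list them; a sketch for inspecting them through the CRI, assuming crictl is installed and the containerd socket is at its common default path:

    # Point crictl at containerd's CRI endpoint
    export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
    crictl images | grep registry.k8s.io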
Nov 8 00:28:31.336092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:31.382109 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-7.scope)... Nov 8 00:28:31.382135 systemd[1]: Reloading... Nov 8 00:28:31.569781 zram_generator::config[2132]: No configuration found. Nov 8 00:28:31.722755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:31.836843 systemd[1]: Reloading finished in 454 ms. Nov 8 00:28:31.895996 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:28:31.896127 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:28:31.896447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:31.904174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:32.421959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:32.431291 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:28:32.501630 kubelet[2182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:28:32.501630 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:28:32.501630 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
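On this restart the kubelet logs three deprecation warnings: --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are flags that upstream wants expressed in the file passed via --config. A sketch of the KubeletConfiguration equivalents for the two flags that have config-file counterparts; the values are illustrative, reusing paths this log mentions elsewhere:

    # Illustrative fields for /var/lib/kubelet/config.yaml (KubeletConfiguration);
    # --pod-infra-container-image has no config equivalent and is removed in 1.35
    cat <<'EOF' >> /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF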
Nov 8 00:28:32.502522 kubelet[2182]: I1108 00:28:32.501710 2182 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:28:33.173467 kubelet[2182]: I1108 00:28:33.173404 2182 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:28:33.173467 kubelet[2182]: I1108 00:28:33.173445 2182 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:28:33.173937 kubelet[2182]: I1108 00:28:33.173898 2182 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:28:33.225523 kubelet[2182]: E1108 00:28:33.225462 2182 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:33.229161 kubelet[2182]: I1108 00:28:33.228960 2182 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:28:33.237265 kubelet[2182]: E1108 00:28:33.237216 2182 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:28:33.237265 kubelet[2182]: I1108 00:28:33.237265 2182 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:28:33.240875 kubelet[2182]: I1108 00:28:33.240844 2182 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:28:33.241273 kubelet[2182]: I1108 00:28:33.241225 2182 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:28:33.241761 kubelet[2182]: I1108 00:28:33.241343 2182 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:28:33.241761 kubelet[2182]: I1108 00:28:33.241760 2182 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:28:33.242024 kubelet[2182]: I1108 00:28:33.241781 2182 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:28:33.242024 kubelet[2182]: I1108 00:28:33.241979 2182 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:28:33.247269 kubelet[2182]: I1108 00:28:33.247228 2182 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:28:33.247269 kubelet[2182]: I1108 00:28:33.247278 2182 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:28:33.247448 kubelet[2182]: I1108 00:28:33.247307 2182 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:28:33.247448 kubelet[2182]: I1108 00:28:33.247324 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:28:33.256749 kubelet[2182]: W1108 00:28:33.255138 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562&limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:33.256749 kubelet[2182]: E1108 00:28:33.255243 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562&limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: 
connection refused" logger="UnhandledError" Nov 8 00:28:33.256749 kubelet[2182]: W1108 00:28:33.255375 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:33.256749 kubelet[2182]: E1108 00:28:33.255429 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:33.256749 kubelet[2182]: I1108 00:28:33.256101 2182 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:28:33.257211 kubelet[2182]: I1108 00:28:33.257183 2182 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:28:33.258952 kubelet[2182]: W1108 00:28:33.258907 2182 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:28:33.264109 kubelet[2182]: I1108 00:28:33.264076 2182 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:28:33.264340 kubelet[2182]: I1108 00:28:33.264301 2182 server.go:1287] "Started kubelet" Nov 8 00:28:33.276514 kubelet[2182]: I1108 00:28:33.276385 2182 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:28:33.279750 kubelet[2182]: I1108 00:28:33.279700 2182 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:28:33.280973 kubelet[2182]: E1108 00:28:33.278789 2182 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562.1875e08a2dd1b629 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,UID:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,},FirstTimestamp:2025-11-08 00:28:33.264244265 +0000 UTC m=+0.827214014,LastTimestamp:2025-11-08 00:28:33.264244265 +0000 UTC m=+0.827214014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,}" Nov 8 00:28:33.283198 kubelet[2182]: I1108 00:28:33.283166 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:28:33.285262 kubelet[2182]: I1108 00:28:33.285166 2182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:28:33.285648 kubelet[2182]: I1108 00:28:33.285606 2182 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:28:33.288381 kubelet[2182]: I1108 00:28:33.287926 2182 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:28:33.293496 kubelet[2182]: E1108 00:28:33.291625 2182 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" Nov 8 00:28:33.293496 kubelet[2182]: I1108 00:28:33.291685 2182 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:28:33.293496 kubelet[2182]: I1108 00:28:33.292005 2182 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:28:33.293496 kubelet[2182]: I1108 00:28:33.292075 2182 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:28:33.293496 kubelet[2182]: W1108 00:28:33.292768 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:33.293496 kubelet[2182]: E1108 00:28:33.292844 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:33.294887 kubelet[2182]: E1108 00:28:33.294835 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562?timeout=10s\": dial tcp 10.128.0.61:6443: connect: connection refused" interval="200ms" Nov 8 00:28:33.295147 kubelet[2182]: I1108 00:28:33.295120 2182 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:28:33.295248 kubelet[2182]: I1108 00:28:33.295225 2182 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:28:33.295527 kubelet[2182]: E1108 00:28:33.295487 2182 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:28:33.296819 kubelet[2182]: I1108 00:28:33.296793 2182 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:28:33.321401 kubelet[2182]: I1108 00:28:33.321337 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:28:33.324712 kubelet[2182]: I1108 00:28:33.323357 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:28:33.324712 kubelet[2182]: I1108 00:28:33.323513 2182 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:28:33.324712 kubelet[2182]: I1108 00:28:33.324284 2182 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:28:33.324712 kubelet[2182]: I1108 00:28:33.324305 2182 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:28:33.325301 kubelet[2182]: E1108 00:28:33.325233 2182 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:28:33.326428 kubelet[2182]: W1108 00:28:33.326086 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:33.326428 kubelet[2182]: E1108 00:28:33.326150 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:33.329891 kubelet[2182]: I1108 00:28:33.329859 2182 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:28:33.329891 kubelet[2182]: I1108 00:28:33.329883 2182 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:28:33.330033 kubelet[2182]: I1108 00:28:33.329908 2182 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:28:33.333973 kubelet[2182]: I1108 00:28:33.333946 2182 policy_none.go:49] "None policy: Start" Nov 8 00:28:33.333973 kubelet[2182]: I1108 00:28:33.333976 2182 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:28:33.334092 kubelet[2182]: I1108 00:28:33.334004 2182 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:28:33.342856 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:28:33.358511 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:28:33.364181 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:28:33.375871 kubelet[2182]: I1108 00:28:33.375837 2182 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:28:33.376143 kubelet[2182]: I1108 00:28:33.376119 2182 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:28:33.376219 kubelet[2182]: I1108 00:28:33.376145 2182 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:28:33.376891 kubelet[2182]: I1108 00:28:33.376847 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:28:33.378010 kubelet[2182]: E1108 00:28:33.377901 2182 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:28:33.378010 kubelet[2182]: E1108 00:28:33.377956 2182 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" Nov 8 00:28:33.445872 systemd[1]: Created slice kubepods-burstable-pod55856dd496c98b47a0c96d11f7dfcc8e.slice - libcontainer container kubepods-burstable-pod55856dd496c98b47a0c96d11f7dfcc8e.slice. 
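Every reflector and lease call above dies with connect: connection refused against 10.128.0.61:6443 because the kubelet itself has not yet started the static kube-apiserver pod that will serve that port; note how the lease controller's retry interval doubles as it backs off (200ms here, then 400ms, 800ms and 1.6s further down). A sketch for watching the port come up from the node:

    # Refused until the kube-apiserver static pod is running
    curl -sk https://10.128.0.61:6443/healthz; echo

    # Or poll until it answers
    until curl -sk https://10.128.0.61:6443/healthz >/dev/null; do sleep 1; done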
Nov 8 00:28:33.457714 kubelet[2182]: E1108 00:28:33.457636 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.462222 systemd[1]: Created slice kubepods-burstable-podb0f18c8171f1facf9ec339b9cf184570.slice - libcontainer container kubepods-burstable-podb0f18c8171f1facf9ec339b9cf184570.slice. Nov 8 00:28:33.474472 kubelet[2182]: E1108 00:28:33.474429 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.480962 systemd[1]: Created slice kubepods-burstable-poda7ba658e7e6f45090109c9705f3e875e.slice - libcontainer container kubepods-burstable-poda7ba658e7e6f45090109c9705f3e875e.slice. Nov 8 00:28:33.482444 kubelet[2182]: I1108 00:28:33.482380 2182 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.483520 kubelet[2182]: E1108 00:28:33.483470 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.61:6443/api/v1/nodes\": dial tcp 10.128.0.61:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.484964 kubelet[2182]: E1108 00:28:33.484930 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493276 kubelet[2182]: I1108 00:28:33.493192 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0f18c8171f1facf9ec339b9cf184570-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"b0f18c8171f1facf9ec339b9cf184570\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493276 kubelet[2182]: I1108 00:28:33.493242 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ba658e7e6f45090109c9705f3e875e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"a7ba658e7e6f45090109c9705f3e875e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493633 kubelet[2182]: I1108 00:28:33.493322 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ba658e7e6f45090109c9705f3e875e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"a7ba658e7e6f45090109c9705f3e875e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493633 kubelet[2182]: I1108 00:28:33.493353 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493633 kubelet[2182]: I1108 00:28:33.493380 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493633 kubelet[2182]: I1108 00:28:33.493411 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493800 kubelet[2182]: I1108 00:28:33.493439 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ba658e7e6f45090109c9705f3e875e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"a7ba658e7e6f45090109c9705f3e875e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493800 kubelet[2182]: I1108 00:28:33.493467 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.493800 kubelet[2182]: I1108 00:28:33.493496 2182 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.495611 kubelet[2182]: E1108 00:28:33.495564 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562?timeout=10s\": dial tcp 10.128.0.61:6443: connect: connection refused" interval="400ms" Nov 8 00:28:33.688568 kubelet[2182]: I1108 00:28:33.688516 2182 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:33.689144 kubelet[2182]: E1108 00:28:33.689000 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.61:6443/api/v1/nodes\": dial tcp 10.128.0.61:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 
00:28:33.759610 containerd[1463]: time="2025-11-08T00:28:33.759454087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,Uid:55856dd496c98b47a0c96d11f7dfcc8e,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:33.776408 containerd[1463]: time="2025-11-08T00:28:33.776344192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,Uid:b0f18c8171f1facf9ec339b9cf184570,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:33.786645 containerd[1463]: time="2025-11-08T00:28:33.786593403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,Uid:a7ba658e7e6f45090109c9705f3e875e,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:33.896451 kubelet[2182]: E1108 00:28:33.896383 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562?timeout=10s\": dial tcp 10.128.0.61:6443: connect: connection refused" interval="800ms" Nov 8 00:28:34.095213 kubelet[2182]: I1108 00:28:34.094526 2182 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:34.095213 kubelet[2182]: E1108 00:28:34.094979 2182 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.61:6443/api/v1/nodes\": dial tcp 10.128.0.61:6443: connect: connection refused" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:34.094670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153378272.mount: Deactivated successfully. 
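The three RunPodSandbox requests above are the kubelet acting on its static pod source: kubelet.go:341 earlier added /etc/kubernetes/manifests as the static pod path, and each manifest there becomes a sandbox plus containers with no API server involved. A quick check of that directory and the resulting sandboxes; the file names shown are the usual kubeadm ones and may differ on this host:

    # Static pod manifests the kubelet is mirroring into sandboxes
    ls /etc/kubernetes/manifests
    # e.g. kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

    # Corresponding pod sandboxes, straight from the CRI
    crictl pods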
Nov 8 00:28:34.107161 containerd[1463]: time="2025-11-08T00:28:34.107063378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:34.108707 containerd[1463]: time="2025-11-08T00:28:34.108643811Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:34.109995 containerd[1463]: time="2025-11-08T00:28:34.109931372Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Nov 8 00:28:34.111080 containerd[1463]: time="2025-11-08T00:28:34.111021132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:28:34.112883 containerd[1463]: time="2025-11-08T00:28:34.112820259Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:34.114414 containerd[1463]: time="2025-11-08T00:28:34.114364707Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:34.115307 containerd[1463]: time="2025-11-08T00:28:34.115231533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:28:34.118743 containerd[1463]: time="2025-11-08T00:28:34.117789911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:34.120446 containerd[1463]: time="2025-11-08T00:28:34.120399782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 343.965061ms" Nov 8 00:28:34.122776 containerd[1463]: time="2025-11-08T00:28:34.122686566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 363.118129ms" Nov 8 00:28:34.127173 containerd[1463]: time="2025-11-08T00:28:34.127115932Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 340.424627ms" Nov 8 00:28:34.148687 kubelet[2182]: W1108 00:28:34.148629 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:34.148924 kubelet[2182]: E1108 
00:28:34.148690 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:34.349322 containerd[1463]: time="2025-11-08T00:28:34.347112479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:34.349322 containerd[1463]: time="2025-11-08T00:28:34.347189407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:34.349322 containerd[1463]: time="2025-11-08T00:28:34.347217804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:34.349322 containerd[1463]: time="2025-11-08T00:28:34.347352899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:34.357343 containerd[1463]: time="2025-11-08T00:28:34.356900654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:34.357343 containerd[1463]: time="2025-11-08T00:28:34.356994467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:34.357343 containerd[1463]: time="2025-11-08T00:28:34.357023148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:34.357343 containerd[1463]: time="2025-11-08T00:28:34.357163739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:34.361357 kubelet[2182]: W1108 00:28:34.361309 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:34.361496 kubelet[2182]: E1108 00:28:34.361376 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:34.369159 containerd[1463]: time="2025-11-08T00:28:34.368464678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:34.369159 containerd[1463]: time="2025-11-08T00:28:34.368543190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:34.369159 containerd[1463]: time="2025-11-08T00:28:34.368582923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:34.373904 containerd[1463]: time="2025-11-08T00:28:34.373788537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:34.411205 systemd[1]: Started cri-containerd-b88084947fe3efdd826d757262d2fd348bdb77492e4d7455e1e059a9fab41787.scope - libcontainer container b88084947fe3efdd826d757262d2fd348bdb77492e4d7455e1e059a9fab41787. Nov 8 00:28:34.418601 systemd[1]: Started cri-containerd-71eab43f30029150c1ede0df955a2bb05b172062195100e16d95a08638957467.scope - libcontainer container 71eab43f30029150c1ede0df955a2bb05b172062195100e16d95a08638957467. Nov 8 00:28:34.424379 systemd[1]: Started cri-containerd-89e70e59de9ac53e27ab9aeded1c8b2b9a4eafb7fbf71a3b8b05e141477d2370.scope - libcontainer container 89e70e59de9ac53e27ab9aeded1c8b2b9a4eafb7fbf71a3b8b05e141477d2370. Nov 8 00:28:34.439613 kubelet[2182]: W1108 00:28:34.439471 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562&limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:34.439613 kubelet[2182]: E1108 00:28:34.439564 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562&limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:34.466369 kubelet[2182]: W1108 00:28:34.466208 2182 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.61:6443: connect: connection refused Nov 8 00:28:34.466369 kubelet[2182]: E1108 00:28:34.466311 2182 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.61:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:28:34.525147 containerd[1463]: time="2025-11-08T00:28:34.525096700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,Uid:a7ba658e7e6f45090109c9705f3e875e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b88084947fe3efdd826d757262d2fd348bdb77492e4d7455e1e059a9fab41787\"" Nov 8 00:28:34.529275 kubelet[2182]: E1108 00:28:34.529225 2182 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9" Nov 8 00:28:34.535446 containerd[1463]: time="2025-11-08T00:28:34.535397159Z" level=info msg="CreateContainer within sandbox \"b88084947fe3efdd826d757262d2fd348bdb77492e4d7455e1e059a9fab41787\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:28:34.538665 containerd[1463]: time="2025-11-08T00:28:34.538525216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,Uid:55856dd496c98b47a0c96d11f7dfcc8e,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"89e70e59de9ac53e27ab9aeded1c8b2b9a4eafb7fbf71a3b8b05e141477d2370\"" Nov 8 00:28:34.541807 kubelet[2182]: E1108 00:28:34.541181 2182 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a6" Nov 8 00:28:34.546340 containerd[1463]: time="2025-11-08T00:28:34.546276012Z" level=info msg="CreateContainer within sandbox \"89e70e59de9ac53e27ab9aeded1c8b2b9a4eafb7fbf71a3b8b05e141477d2370\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:28:34.569869 containerd[1463]: time="2025-11-08T00:28:34.569071586Z" level=info msg="CreateContainer within sandbox \"b88084947fe3efdd826d757262d2fd348bdb77492e4d7455e1e059a9fab41787\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcb96e69d119a69d82d362bfee31827dfc90963263fec2f6ad71dc587fea39ae\"" Nov 8 00:28:34.571878 containerd[1463]: time="2025-11-08T00:28:34.571810383Z" level=info msg="StartContainer for \"fcb96e69d119a69d82d362bfee31827dfc90963263fec2f6ad71dc587fea39ae\"" Nov 8 00:28:34.575189 containerd[1463]: time="2025-11-08T00:28:34.575055405Z" level=info msg="CreateContainer within sandbox \"89e70e59de9ac53e27ab9aeded1c8b2b9a4eafb7fbf71a3b8b05e141477d2370\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7442fd28d68178ba90aa5ee7d168bc80f6d49ea034597f7d12d162461a8d2341\"" Nov 8 00:28:34.575742 containerd[1463]: time="2025-11-08T00:28:34.575679383Z" level=info msg="StartContainer for \"7442fd28d68178ba90aa5ee7d168bc80f6d49ea034597f7d12d162461a8d2341\"" Nov 8 00:28:34.581697 containerd[1463]: time="2025-11-08T00:28:34.581483897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,Uid:b0f18c8171f1facf9ec339b9cf184570,Namespace:kube-system,Attempt:0,} returns sandbox id \"71eab43f30029150c1ede0df955a2bb05b172062195100e16d95a08638957467\"" Nov 8 00:28:34.583746 kubelet[2182]: E1108 00:28:34.583678 2182 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9" Nov 8 00:28:34.586455 containerd[1463]: time="2025-11-08T00:28:34.586216889Z" level=info msg="CreateContainer within sandbox \"71eab43f30029150c1ede0df955a2bb05b172062195100e16d95a08638957467\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:28:34.613338 containerd[1463]: time="2025-11-08T00:28:34.611343196Z" level=info msg="CreateContainer within sandbox \"71eab43f30029150c1ede0df955a2bb05b172062195100e16d95a08638957467\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ec3be41f48fe979a7dc4a97383a773c42abaa08803525577138056968781ead9\"" Nov 8 00:28:34.617637 containerd[1463]: time="2025-11-08T00:28:34.616046625Z" level=info msg="StartContainer for \"ec3be41f48fe979a7dc4a97383a773c42abaa08803525577138056968781ead9\"" Nov 8 00:28:34.639974 systemd[1]: Started cri-containerd-fcb96e69d119a69d82d362bfee31827dfc90963263fec2f6ad71dc587fea39ae.scope - libcontainer container fcb96e69d119a69d82d362bfee31827dfc90963263fec2f6ad71dc587fea39ae. 
Nov 8 00:28:34.660767 systemd[1]: Started cri-containerd-7442fd28d68178ba90aa5ee7d168bc80f6d49ea034597f7d12d162461a8d2341.scope - libcontainer container 7442fd28d68178ba90aa5ee7d168bc80f6d49ea034597f7d12d162461a8d2341. Nov 8 00:28:34.700510 kubelet[2182]: E1108 00:28:34.700451 2182 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562?timeout=10s\": dial tcp 10.128.0.61:6443: connect: connection refused" interval="1.6s" Nov 8 00:28:34.705094 systemd[1]: Started cri-containerd-ec3be41f48fe979a7dc4a97383a773c42abaa08803525577138056968781ead9.scope - libcontainer container ec3be41f48fe979a7dc4a97383a773c42abaa08803525577138056968781ead9. Nov 8 00:28:34.757773 containerd[1463]: time="2025-11-08T00:28:34.756666783Z" level=info msg="StartContainer for \"fcb96e69d119a69d82d362bfee31827dfc90963263fec2f6ad71dc587fea39ae\" returns successfully" Nov 8 00:28:34.795335 containerd[1463]: time="2025-11-08T00:28:34.795183407Z" level=info msg="StartContainer for \"7442fd28d68178ba90aa5ee7d168bc80f6d49ea034597f7d12d162461a8d2341\" returns successfully" Nov 8 00:28:34.830278 containerd[1463]: time="2025-11-08T00:28:34.830134200Z" level=info msg="StartContainer for \"ec3be41f48fe979a7dc4a97383a773c42abaa08803525577138056968781ead9\" returns successfully" Nov 8 00:28:34.899927 kubelet[2182]: I1108 00:28:34.899565 2182 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:35.345824 kubelet[2182]: E1108 00:28:35.345324 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:35.345824 kubelet[2182]: E1108 00:28:35.345324 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:35.352088 kubelet[2182]: E1108 00:28:35.352050 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:36.358907 kubelet[2182]: E1108 00:28:36.358644 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:36.360261 kubelet[2182]: E1108 00:28:36.359911 2182 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.113853 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
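All three StartContainer calls return successfully above, so the control plane is now coming up locally; the node-registration attempts that kept failing with connection refused can start to succeed, which is what the "Successfully registered node" line below reports at 00:28:38. Once the API server answers, registration can be checked from the node; the kubeconfig path here is the usual kubeadm location and is an assumption, not something this log states:

    # After kube-apiserver is serving on 6443
    kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide

The mirror-pod errors that follow ("no PriorityClass with name system-node-critical was found") typically clear on their own once the freshly started API server finishes bootstrapping its built-in priority classes.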
Nov 8 00:28:38.438217 kubelet[2182]: E1108 00:28:38.438155 2182 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.460906 kubelet[2182]: I1108 00:28:38.460858 2182 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.493438 kubelet[2182]: I1108 00:28:38.493388 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.514234 kubelet[2182]: E1108 00:28:38.513613 2182 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562.1875e08a2dd1b629 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,UID:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,},FirstTimestamp:2025-11-08 00:28:33.264244265 +0000 UTC m=+0.827214014,LastTimestamp:2025-11-08 00:28:33.264244265 +0000 UTC m=+0.827214014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562,}" Nov 8 00:28:38.516269 kubelet[2182]: E1108 00:28:38.515881 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.516269 kubelet[2182]: I1108 00:28:38.516019 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.522092 kubelet[2182]: E1108 00:28:38.521827 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.522092 kubelet[2182]: I1108 00:28:38.521869 2182 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:38.525096 kubelet[2182]: E1108 00:28:38.525063 2182 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:39.252707 kubelet[2182]: I1108 00:28:39.252652 2182 apiserver.go:52] "Watching apiserver" Nov 8 00:28:39.293069 kubelet[2182]: I1108 00:28:39.293016 2182 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:28:40.429344 systemd[1]: Reloading requested from client PID 2465 ('systemctl') 
(unit session-7.scope)... Nov 8 00:28:40.429370 systemd[1]: Reloading... Nov 8 00:28:40.573846 zram_generator::config[2508]: No configuration found. Nov 8 00:28:40.719057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:40.842629 systemd[1]: Reloading finished in 412 ms. Nov 8 00:28:40.897974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:40.915554 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:28:40.915911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:40.916003 systemd[1]: kubelet.service: Consumed 1.357s CPU time, 133.5M memory peak, 0B memory swap peak. Nov 8 00:28:40.922308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:41.234189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:41.248327 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:28:41.326392 kubelet[2553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:28:41.326392 kubelet[2553]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:28:41.326392 kubelet[2553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:28:41.326996 kubelet[2553]: I1108 00:28:41.326514 2553 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:28:41.338777 kubelet[2553]: I1108 00:28:41.338293 2553 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:28:41.338777 kubelet[2553]: I1108 00:28:41.338330 2553 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:28:41.338777 kubelet[2553]: I1108 00:28:41.338645 2553 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:28:41.340208 kubelet[2553]: I1108 00:28:41.340169 2553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:28:41.343545 kubelet[2553]: I1108 00:28:41.343315 2553 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:28:41.347219 kubelet[2553]: E1108 00:28:41.347170 2553 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:28:41.347219 kubelet[2553]: I1108 00:28:41.347222 2553 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:28:41.351573 kubelet[2553]: I1108 00:28:41.351529 2553 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:28:41.352003 kubelet[2553]: I1108 00:28:41.351952 2553 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:28:41.352247 kubelet[2553]: I1108 00:28:41.351995 2553 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:28:41.352411 kubelet[2553]: I1108 00:28:41.352253 2553 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:28:41.352411 kubelet[2553]: I1108 00:28:41.352272 2553 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:28:41.352411 kubelet[2553]: I1108 00:28:41.352337 2553 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:28:41.352583 kubelet[2553]: I1108 00:28:41.352562 2553 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:28:41.352638 kubelet[2553]: I1108 00:28:41.352609 2553 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:28:41.352638 kubelet[2553]: I1108 00:28:41.352637 2553 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:28:41.352773 kubelet[2553]: I1108 00:28:41.352665 2553 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:28:41.355808 kubelet[2553]: I1108 00:28:41.354395 2553 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:28:41.355808 kubelet[2553]: I1108 00:28:41.355066 2553 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:28:41.355808 kubelet[2553]: I1108 00:28:41.355632 2553 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:28:41.355808 kubelet[2553]: I1108 00:28:41.355678 2553 server.go:1287] "Started kubelet" Nov 8 00:28:41.363070 kubelet[2553]: I1108 00:28:41.363009 2553 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:28:41.366402 kubelet[2553]: I1108 00:28:41.366339 
2553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:28:41.366799 kubelet[2553]: I1108 00:28:41.366769 2553 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:28:41.369054 kubelet[2553]: I1108 00:28:41.369028 2553 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:28:41.374786 kubelet[2553]: I1108 00:28:41.374545 2553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:28:41.379181 kubelet[2553]: I1108 00:28:41.379150 2553 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:28:41.382023 kubelet[2553]: I1108 00:28:41.381820 2553 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:28:41.383745 kubelet[2553]: E1108 00:28:41.383662 2553 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" not found" Nov 8 00:28:41.383841 kubelet[2553]: I1108 00:28:41.383763 2553 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:28:41.384211 kubelet[2553]: I1108 00:28:41.383919 2553 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:28:41.389733 kubelet[2553]: I1108 00:28:41.387082 2553 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:28:41.389733 kubelet[2553]: I1108 00:28:41.387208 2553 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:28:41.398049 kubelet[2553]: I1108 00:28:41.397992 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:28:41.401747 kubelet[2553]: I1108 00:28:41.399894 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:28:41.401747 kubelet[2553]: I1108 00:28:41.399935 2553 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:28:41.401747 kubelet[2553]: I1108 00:28:41.399962 2553 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:28:41.401747 kubelet[2553]: I1108 00:28:41.399972 2553 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:28:41.401747 kubelet[2553]: E1108 00:28:41.400040 2553 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:28:41.405500 kubelet[2553]: I1108 00:28:41.405469 2553 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:28:41.431768 kubelet[2553]: E1108 00:28:41.430889 2553 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:28:41.502798 kubelet[2553]: E1108 00:28:41.500829 2553 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:28:41.503383 kubelet[2553]: I1108 00:28:41.503353 2553 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:28:41.503383 kubelet[2553]: I1108 00:28:41.503379 2553 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:28:41.503610 kubelet[2553]: I1108 00:28:41.503411 2553 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:28:41.504216 kubelet[2553]: I1108 00:28:41.503865 2553 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:28:41.504216 kubelet[2553]: I1108 00:28:41.503893 2553 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:28:41.504216 kubelet[2553]: I1108 00:28:41.503924 2553 policy_none.go:49] "None policy: Start" Nov 8 00:28:41.504216 kubelet[2553]: I1108 00:28:41.503940 2553 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:28:41.504216 kubelet[2553]: I1108 00:28:41.503959 2553 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:28:41.504216 kubelet[2553]: I1108 00:28:41.504186 2553 state_mem.go:75] "Updated machine memory state" Nov 8 00:28:41.514476 kubelet[2553]: I1108 00:28:41.513917 2553 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:28:41.514476 kubelet[2553]: I1108 00:28:41.514139 2553 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:28:41.514476 kubelet[2553]: I1108 00:28:41.514155 2553 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:28:41.515212 kubelet[2553]: I1108 00:28:41.515166 2553 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:28:41.519524 kubelet[2553]: E1108 00:28:41.518389 2553 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:28:41.631123 kubelet[2553]: I1108 00:28:41.631088 2553 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.642270 kubelet[2553]: I1108 00:28:41.642217 2553 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.642522 kubelet[2553]: I1108 00:28:41.642508 2553 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.703223 kubelet[2553]: I1108 00:28:41.701761 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.703223 kubelet[2553]: I1108 00:28:41.701780 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.703503 kubelet[2553]: I1108 00:28:41.702012 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.712044 kubelet[2553]: W1108 00:28:41.710612 2553 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 8 00:28:41.715374 kubelet[2553]: W1108 00:28:41.715286 2553 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 8 00:28:41.716587 kubelet[2553]: W1108 00:28:41.715779 2553 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 8 00:28:41.786580 kubelet[2553]: I1108 00:28:41.786257 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7ba658e7e6f45090109c9705f3e875e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"a7ba658e7e6f45090109c9705f3e875e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.786580 kubelet[2553]: I1108 00:28:41.786344 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.786580 kubelet[2553]: I1108 00:28:41.786383 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.786580 kubelet[2553]: I1108 00:28:41.786434 2553 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.786933 kubelet[2553]: I1108 00:28:41.786465 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7ba658e7e6f45090109c9705f3e875e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"a7ba658e7e6f45090109c9705f3e875e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.788343 kubelet[2553]: I1108 00:28:41.788172 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7ba658e7e6f45090109c9705f3e875e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"a7ba658e7e6f45090109c9705f3e875e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.788343 kubelet[2553]: I1108 00:28:41.788243 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.788343 kubelet[2553]: I1108 00:28:41.788297 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55856dd496c98b47a0c96d11f7dfcc8e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"55856dd496c98b47a0c96d11f7dfcc8e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:41.788572 kubelet[2553]: I1108 00:28:41.788371 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0f18c8171f1facf9ec339b9cf184570-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" (UID: \"b0f18c8171f1facf9ec339b9cf184570\") " pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:42.366085 kubelet[2553]: I1108 00:28:42.366032 2553 apiserver.go:52] "Watching apiserver" Nov 8 00:28:42.384547 kubelet[2553]: I1108 00:28:42.384472 2553 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:28:42.463216 kubelet[2553]: I1108 00:28:42.463173 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:42.464365 kubelet[2553]: I1108 00:28:42.464326 2553 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:42.479824 kubelet[2553]: W1108 
00:28:42.477318 2553 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 8 00:28:42.480024 kubelet[2553]: E1108 00:28:42.479909 2553 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:42.486292 kubelet[2553]: W1108 00:28:42.486250 2553 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 8 00:28:42.486459 kubelet[2553]: E1108 00:28:42.486332 2553 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:28:42.528058 kubelet[2553]: I1108 00:28:42.527976 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" podStartSLOduration=1.527948933 podStartE2EDuration="1.527948933s" podCreationTimestamp="2025-11-08 00:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:28:42.515361132 +0000 UTC m=+1.259795996" watchObservedRunningTime="2025-11-08 00:28:42.527948933 +0000 UTC m=+1.272383798" Nov 8 00:28:42.544134 kubelet[2553]: I1108 00:28:42.544057 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" podStartSLOduration=1.544031221 podStartE2EDuration="1.544031221s" podCreationTimestamp="2025-11-08 00:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:28:42.529167098 +0000 UTC m=+1.273601965" watchObservedRunningTime="2025-11-08 00:28:42.544031221 +0000 UTC m=+1.288466086" Nov 8 00:28:42.564923 kubelet[2553]: I1108 00:28:42.563915 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" podStartSLOduration=1.563886604 podStartE2EDuration="1.563886604s" podCreationTimestamp="2025-11-08 00:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:28:42.544508878 +0000 UTC m=+1.288943743" watchObservedRunningTime="2025-11-08 00:28:42.563886604 +0000 UTC m=+1.308321465" Nov 8 00:28:46.710390 kubelet[2553]: I1108 00:28:46.710341 2553 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:28:46.711395 containerd[1463]: time="2025-11-08T00:28:46.711334388Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:28:46.711887 kubelet[2553]: I1108 00:28:46.711621 2553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:28:47.339908 systemd[1]: Created slice kubepods-besteffort-pod449543da_087c_4841_843a_00a3d9f3261f.slice - libcontainer container kubepods-besteffort-pod449543da_087c_4841_843a_00a3d9f3261f.slice. Nov 8 00:28:47.425907 kubelet[2553]: I1108 00:28:47.425636 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/449543da-087c-4841-843a-00a3d9f3261f-xtables-lock\") pod \"kube-proxy-jpg7n\" (UID: \"449543da-087c-4841-843a-00a3d9f3261f\") " pod="kube-system/kube-proxy-jpg7n" Nov 8 00:28:47.425907 kubelet[2553]: I1108 00:28:47.425689 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/449543da-087c-4841-843a-00a3d9f3261f-lib-modules\") pod \"kube-proxy-jpg7n\" (UID: \"449543da-087c-4841-843a-00a3d9f3261f\") " pod="kube-system/kube-proxy-jpg7n" Nov 8 00:28:47.425907 kubelet[2553]: I1108 00:28:47.425740 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px52t\" (UniqueName: \"kubernetes.io/projected/449543da-087c-4841-843a-00a3d9f3261f-kube-api-access-px52t\") pod \"kube-proxy-jpg7n\" (UID: \"449543da-087c-4841-843a-00a3d9f3261f\") " pod="kube-system/kube-proxy-jpg7n" Nov 8 00:28:47.425907 kubelet[2553]: I1108 00:28:47.425781 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/449543da-087c-4841-843a-00a3d9f3261f-kube-proxy\") pod \"kube-proxy-jpg7n\" (UID: \"449543da-087c-4841-843a-00a3d9f3261f\") " pod="kube-system/kube-proxy-jpg7n" Nov 8 00:28:47.656348 containerd[1463]: time="2025-11-08T00:28:47.656219544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpg7n,Uid:449543da-087c-4841-843a-00a3d9f3261f,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:47.716340 containerd[1463]: time="2025-11-08T00:28:47.716182292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:47.717825 containerd[1463]: time="2025-11-08T00:28:47.716783555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:47.717825 containerd[1463]: time="2025-11-08T00:28:47.716827919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:47.717825 containerd[1463]: time="2025-11-08T00:28:47.716980154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:47.792109 systemd[1]: Started cri-containerd-113181ac8f327b4a817a3378df8da9f9d7946615987fbec479d6316aa030924a.scope - libcontainer container 113181ac8f327b4a817a3378df8da9f9d7946615987fbec479d6316aa030924a. Nov 8 00:28:47.794523 systemd[1]: Created slice kubepods-besteffort-pod7aa677c2_468e_4bc9_bf85_7d519e6b8b69.slice - libcontainer container kubepods-besteffort-pod7aa677c2_468e_4bc9_bf85_7d519e6b8b69.slice. 
Nov 8 00:28:47.828687 kubelet[2553]: I1108 00:28:47.828022 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7aa677c2-468e-4bc9-bf85-7d519e6b8b69-var-lib-calico\") pod \"tigera-operator-7dcd859c48-t4cxd\" (UID: \"7aa677c2-468e-4bc9-bf85-7d519e6b8b69\") " pod="tigera-operator/tigera-operator-7dcd859c48-t4cxd" Nov 8 00:28:47.828687 kubelet[2553]: I1108 00:28:47.828080 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltf2k\" (UniqueName: \"kubernetes.io/projected/7aa677c2-468e-4bc9-bf85-7d519e6b8b69-kube-api-access-ltf2k\") pod \"tigera-operator-7dcd859c48-t4cxd\" (UID: \"7aa677c2-468e-4bc9-bf85-7d519e6b8b69\") " pod="tigera-operator/tigera-operator-7dcd859c48-t4cxd" Nov 8 00:28:47.832855 containerd[1463]: time="2025-11-08T00:28:47.832211641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jpg7n,Uid:449543da-087c-4841-843a-00a3d9f3261f,Namespace:kube-system,Attempt:0,} returns sandbox id \"113181ac8f327b4a817a3378df8da9f9d7946615987fbec479d6316aa030924a\"" Nov 8 00:28:47.838578 containerd[1463]: time="2025-11-08T00:28:47.838267004Z" level=info msg="CreateContainer within sandbox \"113181ac8f327b4a817a3378df8da9f9d7946615987fbec479d6316aa030924a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:28:47.862010 containerd[1463]: time="2025-11-08T00:28:47.861952959Z" level=info msg="CreateContainer within sandbox \"113181ac8f327b4a817a3378df8da9f9d7946615987fbec479d6316aa030924a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"248b5ac23939a6a7233f9028e7659dfb0f3b40f0584663e135e85bbc8d4f64b1\"" Nov 8 00:28:47.866588 containerd[1463]: time="2025-11-08T00:28:47.864924486Z" level=info msg="StartContainer for \"248b5ac23939a6a7233f9028e7659dfb0f3b40f0584663e135e85bbc8d4f64b1\"" Nov 8 00:28:47.905965 systemd[1]: Started cri-containerd-248b5ac23939a6a7233f9028e7659dfb0f3b40f0584663e135e85bbc8d4f64b1.scope - libcontainer container 248b5ac23939a6a7233f9028e7659dfb0f3b40f0584663e135e85bbc8d4f64b1. Nov 8 00:28:47.964026 containerd[1463]: time="2025-11-08T00:28:47.963762002Z" level=info msg="StartContainer for \"248b5ac23939a6a7233f9028e7659dfb0f3b40f0584663e135e85bbc8d4f64b1\" returns successfully" Nov 8 00:28:48.102235 containerd[1463]: time="2025-11-08T00:28:48.102180109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t4cxd,Uid:7aa677c2-468e-4bc9-bf85-7d519e6b8b69,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:28:48.148541 containerd[1463]: time="2025-11-08T00:28:48.148167220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:48.148541 containerd[1463]: time="2025-11-08T00:28:48.148251115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:48.148541 containerd[1463]: time="2025-11-08T00:28:48.148278443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:48.148541 containerd[1463]: time="2025-11-08T00:28:48.148425096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:48.187029 systemd[1]: Started cri-containerd-20625846771c9b9e6bd7fa15c0e5f82b3cd9f9645fb0ed0513ef2515a9329f41.scope - libcontainer container 20625846771c9b9e6bd7fa15c0e5f82b3cd9f9645fb0ed0513ef2515a9329f41. Nov 8 00:28:48.250032 containerd[1463]: time="2025-11-08T00:28:48.249810931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-t4cxd,Uid:7aa677c2-468e-4bc9-bf85-7d519e6b8b69,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"20625846771c9b9e6bd7fa15c0e5f82b3cd9f9645fb0ed0513ef2515a9329f41\"" Nov 8 00:28:48.255133 containerd[1463]: time="2025-11-08T00:28:48.255079890Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:28:48.551648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835259221.mount: Deactivated successfully. Nov 8 00:28:50.164571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16442136.mount: Deactivated successfully. Nov 8 00:28:51.096741 kubelet[2553]: I1108 00:28:51.095830 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jpg7n" podStartSLOduration=4.095802534 podStartE2EDuration="4.095802534s" podCreationTimestamp="2025-11-08 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:28:48.493809506 +0000 UTC m=+7.238244372" watchObservedRunningTime="2025-11-08 00:28:51.095802534 +0000 UTC m=+9.840237399" Nov 8 00:28:51.205243 containerd[1463]: time="2025-11-08T00:28:51.205168171Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.206607 containerd[1463]: time="2025-11-08T00:28:51.206431037Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:28:51.208213 containerd[1463]: time="2025-11-08T00:28:51.208033424Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.214362 containerd[1463]: time="2025-11-08T00:28:51.214317734Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.216153 containerd[1463]: time="2025-11-08T00:28:51.215709024Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.960578289s" Nov 8 00:28:51.216153 containerd[1463]: time="2025-11-08T00:28:51.215778851Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:28:51.218874 containerd[1463]: time="2025-11-08T00:28:51.218797690Z" level=info msg="CreateContainer within sandbox \"20625846771c9b9e6bd7fa15c0e5f82b3cd9f9645fb0ed0513ef2515a9329f41\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:28:51.241229 containerd[1463]: time="2025-11-08T00:28:51.241180221Z" level=info msg="CreateContainer within sandbox 
\"20625846771c9b9e6bd7fa15c0e5f82b3cd9f9645fb0ed0513ef2515a9329f41\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1eb6fc49519b3cd3568cb08a3dc6ea8b87fe4fe32a16eb91cb57f4ba948d97d0\"" Nov 8 00:28:51.241942 containerd[1463]: time="2025-11-08T00:28:51.241892736Z" level=info msg="StartContainer for \"1eb6fc49519b3cd3568cb08a3dc6ea8b87fe4fe32a16eb91cb57f4ba948d97d0\"" Nov 8 00:28:51.292972 systemd[1]: Started cri-containerd-1eb6fc49519b3cd3568cb08a3dc6ea8b87fe4fe32a16eb91cb57f4ba948d97d0.scope - libcontainer container 1eb6fc49519b3cd3568cb08a3dc6ea8b87fe4fe32a16eb91cb57f4ba948d97d0. Nov 8 00:28:51.330966 containerd[1463]: time="2025-11-08T00:28:51.330884743Z" level=info msg="StartContainer for \"1eb6fc49519b3cd3568cb08a3dc6ea8b87fe4fe32a16eb91cb57f4ba948d97d0\" returns successfully" Nov 8 00:28:51.515244 kubelet[2553]: I1108 00:28:51.515159 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-t4cxd" podStartSLOduration=1.54991145 podStartE2EDuration="4.515132822s" podCreationTimestamp="2025-11-08 00:28:47 +0000 UTC" firstStartedPulling="2025-11-08 00:28:48.25185295 +0000 UTC m=+6.996287802" lastFinishedPulling="2025-11-08 00:28:51.217074335 +0000 UTC m=+9.961509174" observedRunningTime="2025-11-08 00:28:51.50406799 +0000 UTC m=+10.248502853" watchObservedRunningTime="2025-11-08 00:28:51.515132822 +0000 UTC m=+10.259567690" Nov 8 00:28:52.260214 update_engine[1449]: I20251108 00:28:52.260131 1449 update_attempter.cc:509] Updating boot flags... Nov 8 00:28:52.324828 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2901) Nov 8 00:28:52.469769 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2905) Nov 8 00:28:58.601503 sudo[1709]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:58.652065 sshd[1706]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:58.663219 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:28:58.664198 systemd[1]: sshd@6-10.128.0.61:22-139.178.89.65:41102.service: Deactivated successfully. Nov 8 00:28:58.670481 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:28:58.671056 systemd[1]: session-7.scope: Consumed 5.974s CPU time, 157.4M memory peak, 0B memory swap peak. Nov 8 00:28:58.675405 systemd-logind[1440]: Removed session 7. Nov 8 00:29:05.891880 systemd[1]: Created slice kubepods-besteffort-pod1803d672_526c_4abe_a066_58b2e033bbdc.slice - libcontainer container kubepods-besteffort-pod1803d672_526c_4abe_a066_58b2e033bbdc.slice. Nov 8 00:29:06.029669 systemd[1]: Created slice kubepods-besteffort-podf9a9449a_d202_461b_aec2_dd73a43b67bb.slice - libcontainer container kubepods-besteffort-podf9a9449a_d202_461b_aec2_dd73a43b67bb.slice. 
Nov 8 00:29:06.054960 kubelet[2553]: I1108 00:29:06.054504 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1803d672-526c-4abe-a066-58b2e033bbdc-tigera-ca-bundle\") pod \"calico-typha-5b6cccddc-cn62q\" (UID: \"1803d672-526c-4abe-a066-58b2e033bbdc\") " pod="calico-system/calico-typha-5b6cccddc-cn62q" Nov 8 00:29:06.055931 kubelet[2553]: I1108 00:29:06.055708 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1803d672-526c-4abe-a066-58b2e033bbdc-typha-certs\") pod \"calico-typha-5b6cccddc-cn62q\" (UID: \"1803d672-526c-4abe-a066-58b2e033bbdc\") " pod="calico-system/calico-typha-5b6cccddc-cn62q" Nov 8 00:29:06.055931 kubelet[2553]: I1108 00:29:06.055789 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kl2f\" (UniqueName: \"kubernetes.io/projected/1803d672-526c-4abe-a066-58b2e033bbdc-kube-api-access-2kl2f\") pod \"calico-typha-5b6cccddc-cn62q\" (UID: \"1803d672-526c-4abe-a066-58b2e033bbdc\") " pod="calico-system/calico-typha-5b6cccddc-cn62q" Nov 8 00:29:06.126023 kubelet[2553]: E1108 00:29:06.125926 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:06.158448 kubelet[2553]: I1108 00:29:06.156815 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6hps\" (UniqueName: \"kubernetes.io/projected/5e54b7a9-1c64-4152-ae7f-d4eec2188483-kube-api-access-q6hps\") pod \"csi-node-driver-kn6nq\" (UID: \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\") " pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:06.158448 kubelet[2553]: I1108 00:29:06.156892 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e54b7a9-1c64-4152-ae7f-d4eec2188483-kubelet-dir\") pod \"csi-node-driver-kn6nq\" (UID: \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\") " pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:06.158448 kubelet[2553]: I1108 00:29:06.156922 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-cni-net-dir\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.158448 kubelet[2553]: I1108 00:29:06.156951 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-flexvol-driver-host\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.158448 kubelet[2553]: I1108 00:29:06.156980 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9a9449a-d202-461b-aec2-dd73a43b67bb-tigera-ca-bundle\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " 
pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159021 kubelet[2553]: I1108 00:29:06.157006 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e54b7a9-1c64-4152-ae7f-d4eec2188483-socket-dir\") pod \"csi-node-driver-kn6nq\" (UID: \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\") " pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:06.159021 kubelet[2553]: I1108 00:29:06.157043 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-lib-modules\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159021 kubelet[2553]: I1108 00:29:06.157076 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-var-run-calico\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159021 kubelet[2553]: I1108 00:29:06.157102 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-cni-bin-dir\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159021 kubelet[2553]: I1108 00:29:06.157129 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f9a9449a-d202-461b-aec2-dd73a43b67bb-node-certs\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159291 kubelet[2553]: I1108 00:29:06.157151 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-policysync\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159291 kubelet[2553]: I1108 00:29:06.157175 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-var-lib-calico\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159291 kubelet[2553]: I1108 00:29:06.157201 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-xtables-lock\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159291 kubelet[2553]: I1108 00:29:06.157228 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f9a9449a-d202-461b-aec2-dd73a43b67bb-cni-log-dir\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159291 kubelet[2553]: I1108 00:29:06.157259 2553 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qqj\" (UniqueName: \"kubernetes.io/projected/f9a9449a-d202-461b-aec2-dd73a43b67bb-kube-api-access-b4qqj\") pod \"calico-node-98kch\" (UID: \"f9a9449a-d202-461b-aec2-dd73a43b67bb\") " pod="calico-system/calico-node-98kch" Nov 8 00:29:06.159633 kubelet[2553]: I1108 00:29:06.157289 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e54b7a9-1c64-4152-ae7f-d4eec2188483-registration-dir\") pod \"csi-node-driver-kn6nq\" (UID: \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\") " pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:06.159633 kubelet[2553]: I1108 00:29:06.157314 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e54b7a9-1c64-4152-ae7f-d4eec2188483-varrun\") pod \"csi-node-driver-kn6nq\" (UID: \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\") " pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:06.262111 kubelet[2553]: E1108 00:29:06.262047 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.263040 kubelet[2553]: W1108 00:29:06.262816 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.263040 kubelet[2553]: E1108 00:29:06.262883 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:06.264941 kubelet[2553]: E1108 00:29:06.263712 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.264941 kubelet[2553]: W1108 00:29:06.263750 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.264941 kubelet[2553]: E1108 00:29:06.264773 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:06.265687 kubelet[2553]: E1108 00:29:06.265424 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.265687 kubelet[2553]: W1108 00:29:06.265444 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.265687 kubelet[2553]: E1108 00:29:06.265541 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:06.266375 kubelet[2553]: E1108 00:29:06.266144 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.266375 kubelet[2553]: W1108 00:29:06.266164 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.266375 kubelet[2553]: E1108 00:29:06.266255 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:06.266914 kubelet[2553]: E1108 00:29:06.266860 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.266914 kubelet[2553]: W1108 00:29:06.266879 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.269295 kubelet[2553]: E1108 00:29:06.269109 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.269295 kubelet[2553]: W1108 00:29:06.269127 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.269477 kubelet[2553]: E1108 00:29:06.269456 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:06.269596 kubelet[2553]: E1108 00:29:06.269570 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:06.269911 kubelet[2553]: E1108 00:29:06.269801 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.269911 kubelet[2553]: W1108 00:29:06.269820 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.270245 kubelet[2553]: E1108 00:29:06.270117 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:06.270461 kubelet[2553]: E1108 00:29:06.270445 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.270650 kubelet[2553]: W1108 00:29:06.270551 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.270893 kubelet[2553]: E1108 00:29:06.270775 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:06.271252 kubelet[2553]: E1108 00:29:06.271232 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:06.273744 kubelet[2553]: W1108 00:29:06.271351 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:06.273877 kubelet[2553]: E1108 00:29:06.273853 2553 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [the three kubelet messages above (driver-call.go:262, driver-call.go:149, plugins.go:695) repeat in varying interleavings, identical except for their timestamps, through Nov 8 00:29:06.325011]
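The three-message cycle above is kubelet's FlexVolume probe loop: driver-call.go execs the plugin binary (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) with the init command and JSON-decodes whatever the process prints to stdout. Because the binary is missing ("executable file not found in $PATH"), stdout is empty, and decoding an empty string fails with "unexpected end of JSON input". A minimal sketch of a driver that would satisfy that contract follows; the status/capabilities field names follow the FlexVolume convention, but the program itself is illustrative, not any real driver.

```go
// Minimal sketch of a FlexVolume driver entry point. kubelet invokes the
// driver binary with a command such as "init" and parses stdout as JSON;
// printing nothing reproduces the "unexpected end of JSON input" errors in
// the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// kubelet requires a JSON reply here before it will register the plugin.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```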
Nov 8 00:29:06.349136 containerd[1463]: time="2025-11-08T00:29:06.348662126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-98kch,Uid:f9a9449a-d202-461b-aec2-dd73a43b67bb,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:06.394757 containerd[1463]: time="2025-11-08T00:29:06.394569862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:06.396231 containerd[1463]: time="2025-11-08T00:29:06.395709179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:06.396231 containerd[1463]: time="2025-11-08T00:29:06.395756400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:06.396231 containerd[1463]: time="2025-11-08T00:29:06.395888754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:06.429983 systemd[1]: Started cri-containerd-558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b.scope - libcontainer container 558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b. Nov 8 00:29:06.484279 containerd[1463]: time="2025-11-08T00:29:06.484221339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-98kch,Uid:f9a9449a-d202-461b-aec2-dd73a43b67bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\"" Nov 8 00:29:06.488918 containerd[1463]: time="2025-11-08T00:29:06.487711244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:29:06.497375 containerd[1463]: time="2025-11-08T00:29:06.496652353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6cccddc-cn62q,Uid:1803d672-526c-4abe-a066-58b2e033bbdc,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:06.545145 containerd[1463]: time="2025-11-08T00:29:06.545023574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:06.545496 containerd[1463]: time="2025-11-08T00:29:06.545446125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:06.545836 containerd[1463]: time="2025-11-08T00:29:06.545786829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:06.556755 containerd[1463]: time="2025-11-08T00:29:06.546230752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:06.591002 systemd[1]: Started cri-containerd-261dad0fefd1bb23658fb2fb28a390f547c3523825b3b3b1f46101f2339c4d44.scope - libcontainer container 261dad0fefd1bb23658fb2fb28a390f547c3523825b3b3b1f46101f2339c4d44. Nov 8 00:29:06.678792 containerd[1463]: time="2025-11-08T00:29:06.678539080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6cccddc-cn62q,Uid:1803d672-526c-4abe-a066-58b2e033bbdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"261dad0fefd1bb23658fb2fb28a390f547c3523825b3b3b1f46101f2339c4d44\"" Nov 8 00:29:07.424481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692726806.mount: Deactivated successfully.
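The RunPodSandbox entries are the CRI traffic behind pod creation: kubelet asks containerd for a sandbox and gets back the 64-hex id that reappears in the cri-containerd-<id>.scope systemd units. A rough client-side equivalent using the published cri-api types, with the metadata copied from the calico-node line above; the socket path is the stock containerd default, and this is a sketch of the call, not kubelet's actual code.

```go
// Sketch: issue RunPodSandbox against containerd's CRI endpoint, mirroring
// the "RunPodSandbox for &PodSandboxMetadata{...}" log lines above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// The same metadata fields containerd prints in the log.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-98kch",
				Uid:       "f9a9449a-d202-461b-aec2-dd73a43b67bb",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 558df16444074dd3...
}
```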
Nov 8 00:29:07.573281 containerd[1463]: time="2025-11-08T00:29:07.573211813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:07.574951 containerd[1463]: time="2025-11-08T00:29:07.574736707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 8 00:29:07.576986 containerd[1463]: time="2025-11-08T00:29:07.576529150Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:07.580264 containerd[1463]: time="2025-11-08T00:29:07.580213481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:07.581828 containerd[1463]: time="2025-11-08T00:29:07.581775511Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.093835365s" Nov 8 00:29:07.581993 containerd[1463]: time="2025-11-08T00:29:07.581961354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:29:07.584002 containerd[1463]: time="2025-11-08T00:29:07.583905254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:29:07.587269 containerd[1463]: time="2025-11-08T00:29:07.587224202Z" level=info msg="CreateContainer within sandbox \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:29:07.609392 containerd[1463]: time="2025-11-08T00:29:07.609334489Z" level=info msg="CreateContainer within sandbox \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d\"" Nov 8 00:29:07.610788 containerd[1463]: time="2025-11-08T00:29:07.610461276Z" level=info msg="StartContainer for \"4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d\"" Nov 8 00:29:07.661054 systemd[1]: Started cri-containerd-4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d.scope - libcontainer container 4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d. Nov 8 00:29:07.709812 containerd[1463]: time="2025-11-08T00:29:07.709398196Z" level=info msg="StartContainer for \"4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d\" returns successfully" Nov 8 00:29:07.732252 systemd[1]: cri-containerd-4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d.scope: Deactivated successfully. 
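The ImageCreate and "Pulled image ... in 1.093835365s" sequence is containerd resolving the tag, writing the content, and recording the image under both its tag and its digest. A minimal reproduction with containerd's Go client, assuming the default socket and the k8s.io namespace that CRI uses; kubelet really pulls through the CRI ImageService, but the resolve/unpack work and the resulting ImageCreate events are the same.

```go
// Sketch: pull the same image the log shows and report tag, digest, and
// wall-clock duration, as in the "Pulled image ... in 1.093835365s" entry.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest, time.Since(start))
}
```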
Nov 8 00:29:08.123733 containerd[1463]: time="2025-11-08T00:29:08.123303277Z" level=info msg="shim disconnected" id=4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d namespace=k8s.io Nov 8 00:29:08.123733 containerd[1463]: time="2025-11-08T00:29:08.123378732Z" level=warning msg="cleaning up after shim disconnected" id=4032fb7bdbd90c8e8dfac391c240d2fd48f26b91778aa2fb9db16df9856d0c9d namespace=k8s.io Nov 8 00:29:08.123733 containerd[1463]: time="2025-11-08T00:29:08.123394069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:08.401210 kubelet[2553]: E1108 00:29:08.401144 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:09.729615 containerd[1463]: time="2025-11-08T00:29:09.729537707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:09.731237 containerd[1463]: time="2025-11-08T00:29:09.731019343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 8 00:29:09.734751 containerd[1463]: time="2025-11-08T00:29:09.732562910Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:09.736435 containerd[1463]: time="2025-11-08T00:29:09.736386186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:09.737391 containerd[1463]: time="2025-11-08T00:29:09.737347054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.153402165s" Nov 8 00:29:09.737561 containerd[1463]: time="2025-11-08T00:29:09.737534518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:29:09.739568 containerd[1463]: time="2025-11-08T00:29:09.739526983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:29:09.758837 containerd[1463]: time="2025-11-08T00:29:09.758558989Z" level=info msg="CreateContainer within sandbox \"261dad0fefd1bb23658fb2fb28a390f547c3523825b3b3b1f46101f2339c4d44\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:29:09.780915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166640114.mount: Deactivated successfully. 
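The "shim disconnected ... cleaning up dead shim" lines are the normal teardown of a container that exits on its own: flexvol-driver is an init container that installs the flexvol binary and quits, so its scope deactivates moments after StartContainer returns. The same create, wait, start, delete lifecycle through the containerd client, with a throwaway busybox container standing in for the real image (image reference and container id here are placeholders):

```go
// Sketch: run a short-lived container and clean it up. task.Delete is what
// ultimately tears down the per-container shim seen disconnecting above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "default")

	img, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	c, err := client.NewContainer(ctx, "short-lived",
		containerd.WithNewSnapshot("short-lived-snap", img),
		containerd.WithNewSpec(oci.WithImageConfig(img), oci.WithProcessArgs("true")))
	if err != nil {
		log.Fatal(err)
	}
	defer c.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio)) // starts the shim
	if err != nil {
		log.Fatal(err)
	}
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	status := <-exitCh // container exits immediately
	code, _, _ := status.Result()
	fmt.Println("exit code:", code)
	task.Delete(ctx) // removes the task and lets the shim shut down
}
```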
Nov 8 00:29:09.784392 containerd[1463]: time="2025-11-08T00:29:09.783559871Z" level=info msg="CreateContainer within sandbox \"261dad0fefd1bb23658fb2fb28a390f547c3523825b3b3b1f46101f2339c4d44\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"da5faf1cd4e679311e9afae47eefc51e8980076fc0f4679559bee7780345f2a9\"" Nov 8 00:29:09.786168 containerd[1463]: time="2025-11-08T00:29:09.786124736Z" level=info msg="StartContainer for \"da5faf1cd4e679311e9afae47eefc51e8980076fc0f4679559bee7780345f2a9\"" Nov 8 00:29:09.836066 systemd[1]: Started cri-containerd-da5faf1cd4e679311e9afae47eefc51e8980076fc0f4679559bee7780345f2a9.scope - libcontainer container da5faf1cd4e679311e9afae47eefc51e8980076fc0f4679559bee7780345f2a9. Nov 8 00:29:09.904268 containerd[1463]: time="2025-11-08T00:29:09.904134203Z" level=info msg="StartContainer for \"da5faf1cd4e679311e9afae47eefc51e8980076fc0f4679559bee7780345f2a9\" returns successfully" Nov 8 00:29:10.402754 kubelet[2553]: E1108 00:29:10.401287 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:10.617859 kubelet[2553]: I1108 00:29:10.616810 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b6cccddc-cn62q" podStartSLOduration=2.559802049 podStartE2EDuration="5.616785528s" podCreationTimestamp="2025-11-08 00:29:05 +0000 UTC" firstStartedPulling="2025-11-08 00:29:06.681654031 +0000 UTC m=+25.426088889" lastFinishedPulling="2025-11-08 00:29:09.738637516 +0000 UTC m=+28.483072368" observedRunningTime="2025-11-08 00:29:10.614143358 +0000 UTC m=+29.358578233" watchObservedRunningTime="2025-11-08 00:29:10.616785528 +0000 UTC m=+29.361220392"
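The pod_startup_latency_tracker entry above is worth decoding: podStartE2EDuration is creation to observed-running, and podStartSLOduration is that figure with image-pull time subtracted, which the monotonic m=+ offsets in the entry let you verify directly (values below are copied from the log):

```go
// Checking the arithmetic in the "Observed pod startup duration" entry:
// SLO duration = E2E duration - (lastFinishedPulling - firstStartedPulling).
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 25.426088889 // m=+ offset, seconds
		lastFinishedPulling = 28.483072368 // m=+ offset, seconds
		e2e                 = 5.616785528  // podStartE2EDuration, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling // 3.056983479s spent pulling
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull)     // 2.559802049s, matching podStartSLOduration
}
```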
image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.353875589s" Nov 8 00:29:13.093621 containerd[1463]: time="2025-11-08T00:29:13.093525457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:29:13.097544 containerd[1463]: time="2025-11-08T00:29:13.097506505Z" level=info msg="CreateContainer within sandbox \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:29:13.121573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864013759.mount: Deactivated successfully. Nov 8 00:29:13.123405 containerd[1463]: time="2025-11-08T00:29:13.123338958Z" level=info msg="CreateContainer within sandbox \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db\"" Nov 8 00:29:13.125627 containerd[1463]: time="2025-11-08T00:29:13.125570805Z" level=info msg="StartContainer for \"b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db\"" Nov 8 00:29:13.173985 systemd[1]: Started cri-containerd-b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db.scope - libcontainer container b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db. Nov 8 00:29:13.213932 containerd[1463]: time="2025-11-08T00:29:13.213713223Z" level=info msg="StartContainer for \"b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db\" returns successfully" Nov 8 00:29:14.275564 containerd[1463]: time="2025-11-08T00:29:14.275498004Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:29:14.279474 systemd[1]: cri-containerd-b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db.scope: Deactivated successfully. Nov 8 00:29:14.321585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db-rootfs.mount: Deactivated successfully. Nov 8 00:29:14.329684 kubelet[2553]: I1108 00:29:14.329629 2553 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:29:14.385528 systemd[1]: Created slice kubepods-burstable-podf25962ab_cc66_4ae1_b3d0_2209da78cffc.slice - libcontainer container kubepods-burstable-podf25962ab_cc66_4ae1_b3d0_2209da78cffc.slice. 
Nov 8 00:29:14.435753 kubelet[2553]: I1108 00:29:14.433036 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvzx\" (UniqueName: \"kubernetes.io/projected/f4df0bd6-9275-4ffb-bc86-7dbb94791082-kube-api-access-jpvzx\") pod \"calico-apiserver-969f74cdf-w6w2r\" (UID: \"f4df0bd6-9275-4ffb-bc86-7dbb94791082\") " pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" Nov 8 00:29:14.435753 kubelet[2553]: I1108 00:29:14.433096 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mb96\" (UniqueName: \"kubernetes.io/projected/e59ce4b6-f87c-444d-abb1-31c4a685274a-kube-api-access-2mb96\") pod \"calico-kube-controllers-66784f75f9-twnqc\" (UID: \"e59ce4b6-f87c-444d-abb1-31c4a685274a\") " pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" Nov 8 00:29:14.435753 kubelet[2553]: I1108 00:29:14.433132 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/418020d8-f003-4c3d-bcf6-3368810f5d40-goldmane-ca-bundle\") pod \"goldmane-666569f655-b28gg\" (UID: \"418020d8-f003-4c3d-bcf6-3368810f5d40\") " pod="calico-system/goldmane-666569f655-b28gg" Nov 8 00:29:14.435753 kubelet[2553]: I1108 00:29:14.433167 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzv5n\" (UniqueName: \"kubernetes.io/projected/418020d8-f003-4c3d-bcf6-3368810f5d40-kube-api-access-jzv5n\") pod \"goldmane-666569f655-b28gg\" (UID: \"418020d8-f003-4c3d-bcf6-3368810f5d40\") " pod="calico-system/goldmane-666569f655-b28gg" Nov 8 00:29:14.435753 kubelet[2553]: I1108 00:29:14.433202 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt25g\" (UniqueName: \"kubernetes.io/projected/56d5b52f-30bd-4997-b2d5-968bfbb3b185-kube-api-access-qt25g\") pod \"whisker-66954587fc-lk9h4\" (UID: \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\") " pod="calico-system/whisker-66954587fc-lk9h4" Nov 8 00:29:14.436146 kubelet[2553]: I1108 00:29:14.433230 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f87c08c-b49d-4506-b03a-99a6bbfdb418-config-volume\") pod \"coredns-668d6bf9bc-zfhqj\" (UID: \"3f87c08c-b49d-4506-b03a-99a6bbfdb418\") " pod="kube-system/coredns-668d6bf9bc-zfhqj" Nov 8 00:29:14.436146 kubelet[2553]: I1108 00:29:14.433261 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/418020d8-f003-4c3d-bcf6-3368810f5d40-goldmane-key-pair\") pod \"goldmane-666569f655-b28gg\" (UID: \"418020d8-f003-4c3d-bcf6-3368810f5d40\") " pod="calico-system/goldmane-666569f655-b28gg" Nov 8 00:29:14.436146 kubelet[2553]: I1108 00:29:14.433306 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-ca-bundle\") pod \"whisker-66954587fc-lk9h4\" (UID: \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\") " pod="calico-system/whisker-66954587fc-lk9h4" Nov 8 00:29:14.436146 kubelet[2553]: I1108 00:29:14.433343 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/f4df0bd6-9275-4ffb-bc86-7dbb94791082-calico-apiserver-certs\") pod \"calico-apiserver-969f74cdf-w6w2r\" (UID: \"f4df0bd6-9275-4ffb-bc86-7dbb94791082\") " pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" Nov 8 00:29:14.436146 kubelet[2553]: I1108 00:29:14.433374 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/06ee44fb-81f0-4173-813e-506c57500250-calico-apiserver-certs\") pod \"calico-apiserver-969f74cdf-2lhrt\" (UID: \"06ee44fb-81f0-4173-813e-506c57500250\") " pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" Nov 8 00:29:14.436416 kubelet[2553]: I1108 00:29:14.433403 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xcp7\" (UniqueName: \"kubernetes.io/projected/06ee44fb-81f0-4173-813e-506c57500250-kube-api-access-7xcp7\") pod \"calico-apiserver-969f74cdf-2lhrt\" (UID: \"06ee44fb-81f0-4173-813e-506c57500250\") " pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" Nov 8 00:29:14.436416 kubelet[2553]: I1108 00:29:14.433434 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59ce4b6-f87c-444d-abb1-31c4a685274a-tigera-ca-bundle\") pod \"calico-kube-controllers-66784f75f9-twnqc\" (UID: \"e59ce4b6-f87c-444d-abb1-31c4a685274a\") " pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" Nov 8 00:29:14.436416 kubelet[2553]: I1108 00:29:14.433469 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-backend-key-pair\") pod \"whisker-66954587fc-lk9h4\" (UID: \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\") " pod="calico-system/whisker-66954587fc-lk9h4" Nov 8 00:29:14.436416 kubelet[2553]: I1108 00:29:14.433498 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldfw\" (UniqueName: \"kubernetes.io/projected/3f87c08c-b49d-4506-b03a-99a6bbfdb418-kube-api-access-7ldfw\") pod \"coredns-668d6bf9bc-zfhqj\" (UID: \"3f87c08c-b49d-4506-b03a-99a6bbfdb418\") " pod="kube-system/coredns-668d6bf9bc-zfhqj" Nov 8 00:29:14.436416 kubelet[2553]: I1108 00:29:14.433526 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/418020d8-f003-4c3d-bcf6-3368810f5d40-config\") pod \"goldmane-666569f655-b28gg\" (UID: \"418020d8-f003-4c3d-bcf6-3368810f5d40\") " pod="calico-system/goldmane-666569f655-b28gg" Nov 8 00:29:14.436757 kubelet[2553]: I1108 00:29:14.433557 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f25962ab-cc66-4ae1-b3d0-2209da78cffc-config-volume\") pod \"coredns-668d6bf9bc-vbsgj\" (UID: \"f25962ab-cc66-4ae1-b3d0-2209da78cffc\") " pod="kube-system/coredns-668d6bf9bc-vbsgj" Nov 8 00:29:14.436757 kubelet[2553]: I1108 00:29:14.433586 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mzr\" (UniqueName: \"kubernetes.io/projected/f25962ab-cc66-4ae1-b3d0-2209da78cffc-kube-api-access-58mzr\") pod \"coredns-668d6bf9bc-vbsgj\" (UID: \"f25962ab-cc66-4ae1-b3d0-2209da78cffc\") " 
pod="kube-system/coredns-668d6bf9bc-vbsgj" Nov 8 00:29:14.438534 systemd[1]: Created slice kubepods-besteffort-pod5e54b7a9_1c64_4152_ae7f_d4eec2188483.slice - libcontainer container kubepods-besteffort-pod5e54b7a9_1c64_4152_ae7f_d4eec2188483.slice. Nov 8 00:29:14.450281 containerd[1463]: time="2025-11-08T00:29:14.450211560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kn6nq,Uid:5e54b7a9-1c64-4152-ae7f-d4eec2188483,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:14.457930 systemd[1]: Created slice kubepods-burstable-pod3f87c08c_b49d_4506_b03a_99a6bbfdb418.slice - libcontainer container kubepods-burstable-pod3f87c08c_b49d_4506_b03a_99a6bbfdb418.slice. Nov 8 00:29:14.474396 systemd[1]: Created slice kubepods-besteffort-pode59ce4b6_f87c_444d_abb1_31c4a685274a.slice - libcontainer container kubepods-besteffort-pode59ce4b6_f87c_444d_abb1_31c4a685274a.slice. Nov 8 00:29:14.492107 systemd[1]: Created slice kubepods-besteffort-pod56d5b52f_30bd_4997_b2d5_968bfbb3b185.slice - libcontainer container kubepods-besteffort-pod56d5b52f_30bd_4997_b2d5_968bfbb3b185.slice. Nov 8 00:29:14.508405 systemd[1]: Created slice kubepods-besteffort-pod418020d8_f003_4c3d_bcf6_3368810f5d40.slice - libcontainer container kubepods-besteffort-pod418020d8_f003_4c3d_bcf6_3368810f5d40.slice. Nov 8 00:29:14.523228 systemd[1]: Created slice kubepods-besteffort-pod06ee44fb_81f0_4173_813e_506c57500250.slice - libcontainer container kubepods-besteffort-pod06ee44fb_81f0_4173_813e_506c57500250.slice. Nov 8 00:29:14.533440 systemd[1]: Created slice kubepods-besteffort-podf4df0bd6_9275_4ffb_bc86_7dbb94791082.slice - libcontainer container kubepods-besteffort-podf4df0bd6_9275_4ffb_bc86_7dbb94791082.slice. Nov 8 00:29:14.693602 containerd[1463]: time="2025-11-08T00:29:14.693537492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vbsgj,Uid:f25962ab-cc66-4ae1-b3d0-2209da78cffc,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:14.766522 containerd[1463]: time="2025-11-08T00:29:14.766458592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zfhqj,Uid:3f87c08c-b49d-4506-b03a-99a6bbfdb418,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:14.866138 containerd[1463]: time="2025-11-08T00:29:14.865807159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66784f75f9-twnqc,Uid:e59ce4b6-f87c-444d-abb1-31c4a685274a,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:14.870915 containerd[1463]: time="2025-11-08T00:29:14.870859259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66954587fc-lk9h4,Uid:56d5b52f-30bd-4997-b2d5-968bfbb3b185,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:14.877809 containerd[1463]: time="2025-11-08T00:29:14.877733682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-w6w2r,Uid:f4df0bd6-9275-4ffb-bc86-7dbb94791082,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:29:14.882623 containerd[1463]: time="2025-11-08T00:29:14.882576156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b28gg,Uid:418020d8-f003-4c3d-bcf6-3368810f5d40,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:14.891055 containerd[1463]: time="2025-11-08T00:29:14.890913917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-2lhrt,Uid:06ee44fb-81f0-4173-813e-506c57500250,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:29:15.201763 containerd[1463]: time="2025-11-08T00:29:15.201569632Z" level=info msg="shim disconnected" 
id=b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db namespace=k8s.io Nov 8 00:29:15.201763 containerd[1463]: time="2025-11-08T00:29:15.201679992Z" level=warning msg="cleaning up after shim disconnected" id=b3da4a5f95c0f9d4ba0a8c2269f9ff1779e27076a274deb1740fd9bb80fc40db namespace=k8s.io Nov 8 00:29:15.202131 containerd[1463]: time="2025-11-08T00:29:15.201804591Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:15.686639 containerd[1463]: time="2025-11-08T00:29:15.686553617Z" level=error msg="Failed to destroy network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.688283 containerd[1463]: time="2025-11-08T00:29:15.687492636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:29:15.698343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b-shm.mount: Deactivated successfully. Nov 8 00:29:15.699478 containerd[1463]: time="2025-11-08T00:29:15.698940594Z" level=error msg="encountered an error cleaning up failed sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.699478 containerd[1463]: time="2025-11-08T00:29:15.699051527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-w6w2r,Uid:f4df0bd6-9275-4ffb-bc86-7dbb94791082,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.699739 kubelet[2553]: E1108 00:29:15.699325 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.699739 kubelet[2553]: E1108 00:29:15.699418 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" Nov 8 00:29:15.699739 kubelet[2553]: E1108 00:29:15.699454 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" Nov 8 00:29:15.700402 kubelet[2553]: E1108 00:29:15.699520 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-969f74cdf-w6w2r_calico-apiserver(f4df0bd6-9275-4ffb-bc86-7dbb94791082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-969f74cdf-w6w2r_calico-apiserver(f4df0bd6-9275-4ffb-bc86-7dbb94791082)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:29:15.757316 containerd[1463]: time="2025-11-08T00:29:15.757228901Z" level=error msg="Failed to destroy network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.760886 containerd[1463]: time="2025-11-08T00:29:15.759960296Z" level=error msg="encountered an error cleaning up failed sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.763876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af-shm.mount: Deactivated successfully. 
Nov 8 00:29:15.765632 containerd[1463]: time="2025-11-08T00:29:15.765454741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vbsgj,Uid:f25962ab-cc66-4ae1-b3d0-2209da78cffc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.766294 kubelet[2553]: E1108 00:29:15.766190 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.766294 kubelet[2553]: E1108 00:29:15.766257 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vbsgj" Nov 8 00:29:15.767851 kubelet[2553]: E1108 00:29:15.766290 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vbsgj" Nov 8 00:29:15.767851 kubelet[2553]: E1108 00:29:15.766360 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vbsgj_kube-system(f25962ab-cc66-4ae1-b3d0-2209da78cffc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vbsgj_kube-system(f25962ab-cc66-4ae1-b3d0-2209da78cffc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vbsgj" podUID="f25962ab-cc66-4ae1-b3d0-2209da78cffc" Nov 8 00:29:15.819463 containerd[1463]: time="2025-11-08T00:29:15.818991269Z" level=error msg="Failed to destroy network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.829273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d-shm.mount: Deactivated successfully. 
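Every RunPodSandbox failure in this stretch, above and below, has the same root cause: the calico CNI plugin will not wire a pod until /var/lib/calico/nodename exists, and that file is written by the calico/node container, which is still starting. kubelet surfaces the stat failure as CreatePodSandboxError and retries until the file appears. A minimal sketch of the check; the hint text is taken from the log, but the function itself is illustrative rather than Calico's actual code.

```go
// Sketch: the nodename lookup whose failure produces every
// "stat /var/lib/calico/nodename: no such file or directory" error above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("stat /var/lib/calico/nodename: %w: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		// kubelet keeps resurfacing this until calico-node finishes starting.
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```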
Nov 8 00:29:15.832478 containerd[1463]: time="2025-11-08T00:29:15.832421401Z" level=error msg="encountered an error cleaning up failed sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.832618 containerd[1463]: time="2025-11-08T00:29:15.832507290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kn6nq,Uid:5e54b7a9-1c64-4152-ae7f-d4eec2188483,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.832820 containerd[1463]: time="2025-11-08T00:29:15.832781297Z" level=error msg="Failed to destroy network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.833247 containerd[1463]: time="2025-11-08T00:29:15.833211203Z" level=error msg="encountered an error cleaning up failed sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.833343 containerd[1463]: time="2025-11-08T00:29:15.833273616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zfhqj,Uid:3f87c08c-b49d-4506-b03a-99a6bbfdb418,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.833628 kubelet[2553]: E1108 00:29:15.833576 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.833781 kubelet[2553]: E1108 00:29:15.833665 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zfhqj" Nov 8 00:29:15.833781 kubelet[2553]: E1108 00:29:15.833700 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zfhqj" Nov 8 00:29:15.833895 kubelet[2553]: E1108 00:29:15.833797 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-zfhqj_kube-system(3f87c08c-b49d-4506-b03a-99a6bbfdb418)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zfhqj_kube-system(3f87c08c-b49d-4506-b03a-99a6bbfdb418)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zfhqj" podUID="3f87c08c-b49d-4506-b03a-99a6bbfdb418" Nov 8 00:29:15.835601 kubelet[2553]: E1108 00:29:15.834775 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.835601 kubelet[2553]: E1108 00:29:15.834855 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:15.835601 kubelet[2553]: E1108 00:29:15.834918 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kn6nq" Nov 8 00:29:15.836574 kubelet[2553]: E1108 00:29:15.834996 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:15.857501 containerd[1463]: time="2025-11-08T00:29:15.857363852Z" level=error msg="Failed to destroy network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.858632 containerd[1463]: time="2025-11-08T00:29:15.858334865Z" level=error msg="encountered an error cleaning up failed sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.858993 containerd[1463]: time="2025-11-08T00:29:15.858943495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66954587fc-lk9h4,Uid:56d5b52f-30bd-4997-b2d5-968bfbb3b185,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.860025 kubelet[2553]: E1108 00:29:15.859526 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.860025 kubelet[2553]: E1108 00:29:15.859599 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66954587fc-lk9h4" Nov 8 00:29:15.860025 kubelet[2553]: E1108 00:29:15.859633 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66954587fc-lk9h4" Nov 8 00:29:15.861125 kubelet[2553]: E1108 00:29:15.859694 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66954587fc-lk9h4_calico-system(56d5b52f-30bd-4997-b2d5-968bfbb3b185)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66954587fc-lk9h4_calico-system(56d5b52f-30bd-4997-b2d5-968bfbb3b185)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66954587fc-lk9h4" podUID="56d5b52f-30bd-4997-b2d5-968bfbb3b185" Nov 8 00:29:15.867520 containerd[1463]: time="2025-11-08T00:29:15.867227285Z" level=error msg="Failed to destroy network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.867520 containerd[1463]: time="2025-11-08T00:29:15.867247928Z" level=error msg="Failed to destroy network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.868229 containerd[1463]: time="2025-11-08T00:29:15.868094331Z" level=error msg="encountered an error cleaning up failed sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.868331 containerd[1463]: time="2025-11-08T00:29:15.868222190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b28gg,Uid:418020d8-f003-4c3d-bcf6-3368810f5d40,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.868444 containerd[1463]: time="2025-11-08T00:29:15.868110758Z" level=error msg="encountered an error cleaning up failed sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.868444 containerd[1463]: time="2025-11-08T00:29:15.868365448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66784f75f9-twnqc,Uid:e59ce4b6-f87c-444d-abb1-31c4a685274a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.869901 kubelet[2553]: E1108 00:29:15.868566 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.869901 kubelet[2553]: E1108 00:29:15.868632 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" Nov 8 00:29:15.869901 kubelet[2553]: 
E1108 00:29:15.868566 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.869901 kubelet[2553]: E1108 00:29:15.868774 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-b28gg" Nov 8 00:29:15.870154 kubelet[2553]: E1108 00:29:15.868756 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" Nov 8 00:29:15.870154 kubelet[2553]: E1108 00:29:15.868802 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-b28gg" Nov 8 00:29:15.870154 kubelet[2553]: E1108 00:29:15.868852 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-b28gg_calico-system(418020d8-f003-4c3d-bcf6-3368810f5d40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-b28gg_calico-system(418020d8-f003-4c3d-bcf6-3368810f5d40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:29:15.870348 kubelet[2553]: E1108 00:29:15.868852 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66784f75f9-twnqc_calico-system(e59ce4b6-f87c-444d-abb1-31c4a685274a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66784f75f9-twnqc_calico-system(e59ce4b6-f87c-444d-abb1-31c4a685274a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" 
Nov 8 00:29:15.871417 containerd[1463]: time="2025-11-08T00:29:15.871267189Z" level=error msg="Failed to destroy network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.871890 containerd[1463]: time="2025-11-08T00:29:15.871840966Z" level=error msg="encountered an error cleaning up failed sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.872008 containerd[1463]: time="2025-11-08T00:29:15.871916757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-2lhrt,Uid:06ee44fb-81f0-4173-813e-506c57500250,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.872312 kubelet[2553]: E1108 00:29:15.872270 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:15.872421 kubelet[2553]: E1108 00:29:15.872335 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" Nov 8 00:29:15.872421 kubelet[2553]: E1108 00:29:15.872369 2553 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" Nov 8 00:29:15.872573 kubelet[2553]: E1108 00:29:15.872435 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-969f74cdf-2lhrt_calico-apiserver(06ee44fb-81f0-4173-813e-506c57500250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-969f74cdf-2lhrt_calico-apiserver(06ee44fb-81f0-4173-813e-506c57500250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:29:16.318072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842-shm.mount: Deactivated successfully. Nov 8 00:29:16.318222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454-shm.mount: Deactivated successfully. Nov 8 00:29:16.318328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f-shm.mount: Deactivated successfully. Nov 8 00:29:16.318437 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033-shm.mount: Deactivated successfully. Nov 8 00:29:16.318540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1-shm.mount: Deactivated successfully. Nov 8 00:29:16.683158 kubelet[2553]: I1108 00:29:16.683109 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:16.685073 containerd[1463]: time="2025-11-08T00:29:16.685018994Z" level=info msg="StopPodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\"" Nov 8 00:29:16.685326 containerd[1463]: time="2025-11-08T00:29:16.685285541Z" level=info msg="Ensure that sandbox 097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f in task-service has been cleanup successfully" Nov 8 00:29:16.687577 kubelet[2553]: I1108 00:29:16.687032 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:16.687872 containerd[1463]: time="2025-11-08T00:29:16.687841722Z" level=info msg="StopPodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\"" Nov 8 00:29:16.688402 containerd[1463]: time="2025-11-08T00:29:16.688059254Z" level=info msg="Ensure that sandbox 1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842 in task-service has been cleanup successfully" Nov 8 00:29:16.695908 kubelet[2553]: I1108 00:29:16.695317 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:16.697015 containerd[1463]: time="2025-11-08T00:29:16.696441809Z" level=info msg="StopPodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\"" Nov 8 00:29:16.698999 containerd[1463]: time="2025-11-08T00:29:16.697913231Z" level=info msg="Ensure that sandbox 078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033 in task-service has been cleanup successfully" Nov 8 00:29:16.708985 kubelet[2553]: I1108 00:29:16.708937 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:16.714518 containerd[1463]: time="2025-11-08T00:29:16.713879747Z" level=info msg="StopPodSandbox for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\"" Nov 8 00:29:16.714518 containerd[1463]: time="2025-11-08T00:29:16.714155411Z" level=info msg="Ensure that sandbox bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af in task-service has been cleanup successfully" Nov 8 
00:29:16.719144 kubelet[2553]: I1108 00:29:16.718980 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:16.725394 containerd[1463]: time="2025-11-08T00:29:16.724872882Z" level=info msg="StopPodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\"" Nov 8 00:29:16.725394 containerd[1463]: time="2025-11-08T00:29:16.725107469Z" level=info msg="Ensure that sandbox ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1 in task-service has been cleanup successfully" Nov 8 00:29:16.729736 kubelet[2553]: I1108 00:29:16.729166 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:16.734581 containerd[1463]: time="2025-11-08T00:29:16.734537324Z" level=info msg="StopPodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\"" Nov 8 00:29:16.743050 containerd[1463]: time="2025-11-08T00:29:16.742998619Z" level=info msg="Ensure that sandbox e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d in task-service has been cleanup successfully" Nov 8 00:29:16.749267 kubelet[2553]: I1108 00:29:16.749229 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:16.751790 containerd[1463]: time="2025-11-08T00:29:16.751283735Z" level=info msg="StopPodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\"" Nov 8 00:29:16.751790 containerd[1463]: time="2025-11-08T00:29:16.751527144Z" level=info msg="Ensure that sandbox aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b in task-service has been cleanup successfully" Nov 8 00:29:16.782831 kubelet[2553]: I1108 00:29:16.782782 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:16.785695 containerd[1463]: time="2025-11-08T00:29:16.785574768Z" level=info msg="StopPodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\"" Nov 8 00:29:16.787341 containerd[1463]: time="2025-11-08T00:29:16.787288420Z" level=info msg="Ensure that sandbox 8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454 in task-service has been cleanup successfully" Nov 8 00:29:16.895922 containerd[1463]: time="2025-11-08T00:29:16.895847652Z" level=error msg="StopPodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" failed" error="failed to destroy network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.897149 kubelet[2553]: E1108 00:29:16.896833 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:16.897149 kubelet[2553]: E1108 
00:29:16.896919 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033"} Nov 8 00:29:16.897149 kubelet[2553]: E1108 00:29:16.897011 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e59ce4b6-f87c-444d-abb1-31c4a685274a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.897149 kubelet[2553]: E1108 00:29:16.897049 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e59ce4b6-f87c-444d-abb1-31c4a685274a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:29:16.897603 containerd[1463]: time="2025-11-08T00:29:16.896925346Z" level=error msg="StopPodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" failed" error="failed to destroy network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.900708 kubelet[2553]: E1108 00:29:16.899701 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:16.900708 kubelet[2553]: E1108 00:29:16.899795 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f"} Nov 8 00:29:16.900708 kubelet[2553]: E1108 00:29:16.899885 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"418020d8-f003-4c3d-bcf6-3368810f5d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.900708 kubelet[2553]: E1108 00:29:16.899945 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"418020d8-f003-4c3d-bcf6-3368810f5d40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:29:16.934417 containerd[1463]: time="2025-11-08T00:29:16.934233216Z" level=error msg="StopPodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" failed" error="failed to destroy network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.935627 kubelet[2553]: E1108 00:29:16.935169 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:16.935627 kubelet[2553]: E1108 00:29:16.935240 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842"} Nov 8 00:29:16.935627 kubelet[2553]: E1108 00:29:16.935492 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.936615 kubelet[2553]: E1108 00:29:16.935869 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66954587fc-lk9h4" podUID="56d5b52f-30bd-4997-b2d5-968bfbb3b185" Nov 8 00:29:16.941822 containerd[1463]: time="2025-11-08T00:29:16.941766160Z" level=error msg="StopPodSandbox for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" failed" error="failed to destroy network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.942151 kubelet[2553]: E1108 00:29:16.942025 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:16.942151 kubelet[2553]: E1108 00:29:16.942095 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af"} Nov 8 00:29:16.942441 kubelet[2553]: E1108 00:29:16.942146 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f25962ab-cc66-4ae1-b3d0-2209da78cffc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.942441 kubelet[2553]: E1108 00:29:16.942184 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f25962ab-cc66-4ae1-b3d0-2209da78cffc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vbsgj" podUID="f25962ab-cc66-4ae1-b3d0-2209da78cffc" Nov 8 00:29:16.955789 containerd[1463]: time="2025-11-08T00:29:16.955697088Z" level=error msg="StopPodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" failed" error="failed to destroy network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.956435 kubelet[2553]: E1108 00:29:16.956230 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:16.956435 kubelet[2553]: E1108 00:29:16.956301 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1"} Nov 8 00:29:16.956435 kubelet[2553]: E1108 00:29:16.956352 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f87c08c-b49d-4506-b03a-99a6bbfdb418\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.956435 kubelet[2553]: E1108 00:29:16.956389 2553 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f87c08c-b49d-4506-b03a-99a6bbfdb418\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zfhqj" podUID="3f87c08c-b49d-4506-b03a-99a6bbfdb418" Nov 8 00:29:16.963577 containerd[1463]: time="2025-11-08T00:29:16.963118057Z" level=error msg="StopPodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" failed" error="failed to destroy network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.964071 containerd[1463]: time="2025-11-08T00:29:16.963827585Z" level=error msg="StopPodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" failed" error="failed to destroy network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.964163 kubelet[2553]: E1108 00:29:16.963455 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:16.964163 kubelet[2553]: E1108 00:29:16.963939 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d"} Nov 8 00:29:16.964163 kubelet[2553]: E1108 00:29:16.963994 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.968821 kubelet[2553]: E1108 00:29:16.964673 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e54b7a9-1c64-4152-ae7f-d4eec2188483\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:16.968821 kubelet[2553]: E1108 
00:29:16.964906 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:16.968821 kubelet[2553]: E1108 00:29:16.964951 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454"} Nov 8 00:29:16.968821 kubelet[2553]: E1108 00:29:16.964992 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06ee44fb-81f0-4173-813e-506c57500250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.969229 kubelet[2553]: E1108 00:29:16.965024 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06ee44fb-81f0-4173-813e-506c57500250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:29:16.974076 containerd[1463]: time="2025-11-08T00:29:16.974003006Z" level=error msg="StopPodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" failed" error="failed to destroy network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:16.974490 kubelet[2553]: E1108 00:29:16.974241 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:16.974490 kubelet[2553]: E1108 00:29:16.974328 2553 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b"} Nov 8 00:29:16.974490 kubelet[2553]: E1108 00:29:16.974374 2553 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4df0bd6-9275-4ffb-bc86-7dbb94791082\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:16.974490 kubelet[2553]: E1108 00:29:16.974414 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4df0bd6-9275-4ffb-bc86-7dbb94791082\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:29:23.319418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110030913.mount: Deactivated successfully. Nov 8 00:29:23.350879 containerd[1463]: time="2025-11-08T00:29:23.350811346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:23.352483 containerd[1463]: time="2025-11-08T00:29:23.352421057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:29:23.355399 containerd[1463]: time="2025-11-08T00:29:23.353814455Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:23.357829 containerd[1463]: time="2025-11-08T00:29:23.356681337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:23.357829 containerd[1463]: time="2025-11-08T00:29:23.357643477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.669457144s" Nov 8 00:29:23.357829 containerd[1463]: time="2025-11-08T00:29:23.357690184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:29:23.383814 containerd[1463]: time="2025-11-08T00:29:23.383751240Z" level=info msg="CreateContainer within sandbox \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:29:23.411076 containerd[1463]: time="2025-11-08T00:29:23.411020897Z" level=info msg="CreateContainer within sandbox \"558df16444074dd3071fc9431cc0dbc588884fbe7d01c165e8ffe0d9cfdba79b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1d1ca77d884c400e9fb6cc5f112c077f695ee5643f1199a91665b5d832dd9992\"" Nov 8 00:29:23.412200 containerd[1463]: time="2025-11-08T00:29:23.411914695Z" level=info msg="StartContainer for \"1d1ca77d884c400e9fb6cc5f112c077f695ee5643f1199a91665b5d832dd9992\"" Nov 8 00:29:23.453974 systemd[1]: Started cri-containerd-1d1ca77d884c400e9fb6cc5f112c077f695ee5643f1199a91665b5d832dd9992.scope - libcontainer container 
1d1ca77d884c400e9fb6cc5f112c077f695ee5643f1199a91665b5d832dd9992. Nov 8 00:29:23.506337 containerd[1463]: time="2025-11-08T00:29:23.506264202Z" level=info msg="StartContainer for \"1d1ca77d884c400e9fb6cc5f112c077f695ee5643f1199a91665b5d832dd9992\" returns successfully" Nov 8 00:29:23.638230 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:29:23.638416 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:29:23.750131 containerd[1463]: time="2025-11-08T00:29:23.750082345Z" level=info msg="StopPodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\"" Nov 8 00:29:23.891272 kubelet[2553]: I1108 00:29:23.890992 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-98kch" podStartSLOduration=2.018741732 podStartE2EDuration="18.890966529s" podCreationTimestamp="2025-11-08 00:29:05 +0000 UTC" firstStartedPulling="2025-11-08 00:29:06.486845914 +0000 UTC m=+25.231280765" lastFinishedPulling="2025-11-08 00:29:23.359070721 +0000 UTC m=+42.103505562" observedRunningTime="2025-11-08 00:29:23.889049937 +0000 UTC m=+42.633484801" watchObservedRunningTime="2025-11-08 00:29:23.890966529 +0000 UTC m=+42.635401391" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.889 [INFO][3699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.890 [INFO][3699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" iface="eth0" netns="/var/run/netns/cni-cb135d79-c157-e21d-53c0-d33ded27061b" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.893 [INFO][3699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" iface="eth0" netns="/var/run/netns/cni-cb135d79-c157-e21d-53c0-d33ded27061b" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.893 [INFO][3699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" iface="eth0" netns="/var/run/netns/cni-cb135d79-c157-e21d-53c0-d33ded27061b" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.893 [INFO][3699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.893 [INFO][3699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.940 [INFO][3712] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.940 [INFO][3712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.940 [INFO][3712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
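Two details of the CNI DEL path are visible in this teardown trace: every IPAM mutation on the host serializes behind a single host-wide lock, and, as the WARNING on the next lines shows, releasing an address that was never recorded is deliberately a no-op, so retried teardowns stay idempotent. A minimal sketch of that lock-plus-idempotent-release pattern; the types and names are made up for illustration and are not Calico's:

```go
// Sketch of an idempotent, host-serialized IPAM release, mirroring the
// "Acquired host-wide IPAM lock" / "Asked to release address but it
// doesn't exist. Ignoring" pattern in the surrounding trace.
package main

import (
	"fmt"
	"sync"
)

// ipamStore stands in for the per-host allocation state; the mutex plays
// the role of the host-wide IPAM lock in the trace.
type ipamStore struct {
	mu       sync.Mutex
	byHandle map[string]string // handleID -> assigned address
}

// release must tolerate unknown handles: CNI DEL is retried until it
// succeeds, so a second release of the same sandbox cannot be an error.
func (s *ipamStore) release(handleID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	addr, ok := s.byHandle[handleID]
	if !ok {
		fmt.Printf("asked to release %q but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(s.byHandle, handleID)
	fmt.Printf("released %s for %q\n", addr, handleID)
}

func main() {
	s := &ipamStore{byHandle: map[string]string{
		"k8s-pod-network.1f8b15ef": "192.168.38.193",
	}}
	s.release("k8s-pod-network.1f8b15ef") // first DEL: releases the address
	s.release("k8s-pod-network.1f8b15ef") // retried DEL: warning, not failure
}
```

Here the handle genuinely doesn't exist because the whisker sandbox never got an address: its ADD failed on the nodename check above, so teardown has only the netns and veth cleanup left to do.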
Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.956 [WARNING][3712] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.956 [INFO][3712] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.959 [INFO][3712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:23.967097 containerd[1463]: 2025-11-08 00:29:23.964 [INFO][3699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:23.968195 containerd[1463]: time="2025-11-08T00:29:23.967245908Z" level=info msg="TearDown network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" successfully" Nov 8 00:29:23.968195 containerd[1463]: time="2025-11-08T00:29:23.967284877Z" level=info msg="StopPodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" returns successfully" Nov 8 00:29:24.019205 kubelet[2553]: I1108 00:29:24.018653 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-ca-bundle\") pod \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\" (UID: \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\") " Nov 8 00:29:24.019205 kubelet[2553]: I1108 00:29:24.018760 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt25g\" (UniqueName: \"kubernetes.io/projected/56d5b52f-30bd-4997-b2d5-968bfbb3b185-kube-api-access-qt25g\") pod \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\" (UID: \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\") " Nov 8 00:29:24.019205 kubelet[2553]: I1108 00:29:24.018795 2553 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-backend-key-pair\") pod \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\" (UID: \"56d5b52f-30bd-4997-b2d5-968bfbb3b185\") " Nov 8 00:29:24.027188 kubelet[2553]: I1108 00:29:24.026941 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "56d5b52f-30bd-4997-b2d5-968bfbb3b185" (UID: "56d5b52f-30bd-4997-b2d5-968bfbb3b185"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:29:24.030866 kubelet[2553]: I1108 00:29:24.030696 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d5b52f-30bd-4997-b2d5-968bfbb3b185-kube-api-access-qt25g" (OuterVolumeSpecName: "kube-api-access-qt25g") pod "56d5b52f-30bd-4997-b2d5-968bfbb3b185" (UID: "56d5b52f-30bd-4997-b2d5-968bfbb3b185"). InnerVolumeSpecName "kube-api-access-qt25g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:29:24.031308 kubelet[2553]: I1108 00:29:24.031170 2553 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "56d5b52f-30bd-4997-b2d5-968bfbb3b185" (UID: "56d5b52f-30bd-4997-b2d5-968bfbb3b185"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:29:24.120116 kubelet[2553]: I1108 00:29:24.119775 2553 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qt25g\" (UniqueName: \"kubernetes.io/projected/56d5b52f-30bd-4997-b2d5-968bfbb3b185-kube-api-access-qt25g\") on node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" DevicePath \"\"" Nov 8 00:29:24.120116 kubelet[2553]: I1108 00:29:24.119978 2553 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-backend-key-pair\") on node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" DevicePath \"\"" Nov 8 00:29:24.120116 kubelet[2553]: I1108 00:29:24.120025 2553 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d5b52f-30bd-4997-b2d5-968bfbb3b185-whisker-ca-bundle\") on node \"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562\" DevicePath \"\"" Nov 8 00:29:24.319678 systemd[1]: run-netns-cni\x2dcb135d79\x2dc157\x2de21d\x2d53c0\x2dd33ded27061b.mount: Deactivated successfully. Nov 8 00:29:24.319867 systemd[1]: var-lib-kubelet-pods-56d5b52f\x2d30bd\x2d4997\x2db2d5\x2d968bfbb3b185-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:29:24.320057 systemd[1]: var-lib-kubelet-pods-56d5b52f\x2d30bd\x2d4997\x2db2d5\x2d968bfbb3b185-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqt25g.mount: Deactivated successfully. Nov 8 00:29:24.822878 systemd[1]: Removed slice kubepods-besteffort-pod56d5b52f_30bd_4997_b2d5_968bfbb3b185.slice - libcontainer container kubepods-besteffort-pod56d5b52f_30bd_4997_b2d5_968bfbb3b185.slice. Nov 8 00:29:24.904160 systemd[1]: Created slice kubepods-besteffort-pod5df9e569_a84b_4372_9851_dc5eac1e2252.slice - libcontainer container kubepods-besteffort-pod5df9e569_a84b_4372_9851_dc5eac1e2252.slice. 
Nov 8 00:29:24.916117 kubelet[2553]: I1108 00:29:24.916075 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:29:24.929779 kubelet[2553]: I1108 00:29:24.928198 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5df9e569-a84b-4372-9851-dc5eac1e2252-whisker-ca-bundle\") pod \"whisker-c54fb4dfc-znnpn\" (UID: \"5df9e569-a84b-4372-9851-dc5eac1e2252\") " pod="calico-system/whisker-c54fb4dfc-znnpn" Nov 8 00:29:24.929779 kubelet[2553]: I1108 00:29:24.928259 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5df9e569-a84b-4372-9851-dc5eac1e2252-whisker-backend-key-pair\") pod \"whisker-c54fb4dfc-znnpn\" (UID: \"5df9e569-a84b-4372-9851-dc5eac1e2252\") " pod="calico-system/whisker-c54fb4dfc-znnpn" Nov 8 00:29:24.929779 kubelet[2553]: I1108 00:29:24.928298 2553 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvcz\" (UniqueName: \"kubernetes.io/projected/5df9e569-a84b-4372-9851-dc5eac1e2252-kube-api-access-lbvcz\") pod \"whisker-c54fb4dfc-znnpn\" (UID: \"5df9e569-a84b-4372-9851-dc5eac1e2252\") " pod="calico-system/whisker-c54fb4dfc-znnpn" Nov 8 00:29:25.212949 containerd[1463]: time="2025-11-08T00:29:25.211541199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c54fb4dfc-znnpn,Uid:5df9e569-a84b-4372-9851-dc5eac1e2252,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:25.408709 kubelet[2553]: I1108 00:29:25.408651 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d5b52f-30bd-4997-b2d5-968bfbb3b185" path="/var/lib/kubelet/pods/56d5b52f-30bd-4997-b2d5-968bfbb3b185/volumes" Nov 8 00:29:25.471692 systemd-networkd[1371]: cali27b813c3112: Link UP Nov 8 00:29:25.472913 systemd-networkd[1371]: cali27b813c3112: Gained carrier Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.287 [INFO][3833] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.307 [INFO][3833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0 whisker-c54fb4dfc- calico-system 5df9e569-a84b-4372-9851-dc5eac1e2252 907 0 2025-11-08 00:29:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:c54fb4dfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 whisker-c54fb4dfc-znnpn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali27b813c3112 [] [] }} ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.307 [INFO][3833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 
8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.381 [INFO][3867] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" HandleID="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.383 [INFO][3867] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" HandleID="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"whisker-c54fb4dfc-znnpn", "timestamp":"2025-11-08 00:29:25.381022906 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.383 [INFO][3867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.383 [INFO][3867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.383 [INFO][3867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.395 [INFO][3867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.403 [INFO][3867] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.414 [INFO][3867] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.417 [INFO][3867] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.420 [INFO][3867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.420 [INFO][3867] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.422 [INFO][3867] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338 Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.429 [INFO][3867] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.449 [INFO][3867] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.193/26] block=192.168.38.192/26 handle="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.449 [INFO][3867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.193/26] handle="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.449 [INFO][3867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:25.507685 containerd[1463]: 2025-11-08 00:29:25.450 [INFO][3867] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.193/26] IPv6=[] ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" HandleID="k8s-pod-network.c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 8 00:29:25.510378 containerd[1463]: 2025-11-08 00:29:25.454 [INFO][3833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0", GenerateName:"whisker-c54fb4dfc-", Namespace:"calico-system", SelfLink:"", UID:"5df9e569-a84b-4372-9851-dc5eac1e2252", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c54fb4dfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"whisker-c54fb4dfc-znnpn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali27b813c3112", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:25.510378 containerd[1463]: 2025-11-08 00:29:25.455 [INFO][3833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.193/32] ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" 
Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 8 00:29:25.510378 containerd[1463]: 2025-11-08 00:29:25.456 [INFO][3833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali27b813c3112 ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 8 00:29:25.510378 containerd[1463]: 2025-11-08 00:29:25.471 [INFO][3833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 8 00:29:25.510378 containerd[1463]: 2025-11-08 00:29:25.474 [INFO][3833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0", GenerateName:"whisker-c54fb4dfc-", Namespace:"calico-system", SelfLink:"", UID:"5df9e569-a84b-4372-9851-dc5eac1e2252", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"c54fb4dfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338", Pod:"whisker-c54fb4dfc-znnpn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali27b813c3112", MAC:"12:e2:d3:a7:25:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:25.510378 containerd[1463]: 2025-11-08 00:29:25.500 [INFO][3833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338" Namespace="calico-system" Pod="whisker-c54fb4dfc-znnpn" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--c54fb4dfc--znnpn-eth0" Nov 8 00:29:25.548699 containerd[1463]: time="2025-11-08T00:29:25.548475515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:25.548699 containerd[1463]: time="2025-11-08T00:29:25.548569715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:25.548699 containerd[1463]: time="2025-11-08T00:29:25.548597150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:25.550712 containerd[1463]: time="2025-11-08T00:29:25.549487407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:25.600852 systemd[1]: run-containerd-runc-k8s.io-c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338-runc.nRItNR.mount: Deactivated successfully. Nov 8 00:29:25.618539 systemd[1]: Started cri-containerd-c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338.scope - libcontainer container c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338. Nov 8 00:29:25.713314 containerd[1463]: time="2025-11-08T00:29:25.713235152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c54fb4dfc-znnpn,Uid:5df9e569-a84b-4372-9851-dc5eac1e2252,Namespace:calico-system,Attempt:0,} returns sandbox id \"c759dcfc8759cd8b2123d483c3b67a2806fc052ef4e34f01a6524f107428d338\"" Nov 8 00:29:25.718104 containerd[1463]: time="2025-11-08T00:29:25.717812397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:29:25.891507 containerd[1463]: time="2025-11-08T00:29:25.891280271Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:25.892981 containerd[1463]: time="2025-11-08T00:29:25.892907777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:29:25.893329 containerd[1463]: time="2025-11-08T00:29:25.892904468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:29:25.894056 kubelet[2553]: E1108 00:29:25.893971 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:25.894180 kubelet[2553]: E1108 00:29:25.894065 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:25.894384 kubelet[2553]: E1108 00:29:25.894323 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f246e03119bb4746838713acdb1a11df,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbvcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c54fb4dfc-znnpn_calico-system(5df9e569-a84b-4372-9851-dc5eac1e2252): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:25.899325 containerd[1463]: time="2025-11-08T00:29:25.898634164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:29:26.058657 containerd[1463]: time="2025-11-08T00:29:26.058570412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:26.060390 containerd[1463]: time="2025-11-08T00:29:26.060227818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:29:26.060390 containerd[1463]: time="2025-11-08T00:29:26.060283246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:26.060597 kubelet[2553]: E1108 00:29:26.060555 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:26.061180 kubelet[2553]: E1108 00:29:26.060619 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:26.061249 kubelet[2553]: E1108 00:29:26.060826 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbvcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c54fb4dfc-znnpn_calico-system(5df9e569-a84b-4372-9851-dc5eac1e2252): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:26.062461 kubelet[2553]: E1108 00:29:26.062353 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252" Nov 8 00:29:26.828662 kubelet[2553]: E1108 00:29:26.828600 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252" Nov 8 00:29:27.084012 systemd-networkd[1371]: cali27b813c3112: Gained IPv6LL Nov 8 00:29:27.403929 containerd[1463]: time="2025-11-08T00:29:27.401973269Z" level=info msg="StopPodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\"" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.465 [INFO][3970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.465 [INFO][3970] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" iface="eth0" netns="/var/run/netns/cni-671a2015-f433-661f-3efb-ca24c1d4d6d1" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.466 [INFO][3970] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" iface="eth0" netns="/var/run/netns/cni-671a2015-f433-661f-3efb-ca24c1d4d6d1" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.466 [INFO][3970] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" iface="eth0" netns="/var/run/netns/cni-671a2015-f433-661f-3efb-ca24c1d4d6d1" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.466 [INFO][3970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.466 [INFO][3970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.501 [INFO][3977] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.502 [INFO][3977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.502 [INFO][3977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.511 [WARNING][3977] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.511 [INFO][3977] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.513 [INFO][3977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:27.516662 containerd[1463]: 2025-11-08 00:29:27.515 [INFO][3970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:27.520042 containerd[1463]: time="2025-11-08T00:29:27.519842662Z" level=info msg="TearDown network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" successfully" Nov 8 00:29:27.520042 containerd[1463]: time="2025-11-08T00:29:27.519888885Z" level=info msg="StopPodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" returns successfully" Nov 8 00:29:27.521104 containerd[1463]: time="2025-11-08T00:29:27.521068347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66784f75f9-twnqc,Uid:e59ce4b6-f87c-444d-abb1-31c4a685274a,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:27.524122 systemd[1]: run-netns-cni\x2d671a2015\x2df433\x2d661f\x2d3efb\x2dca24c1d4d6d1.mount: Deactivated successfully. 
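The ErrImagePull entries followed by ImagePullBackOff in the kubelet lines above reflect kubelet's per-image retry back-off: each failed pull roughly doubles the wait before the next attempt, up to a cap, which is why the "Back-off pulling image" message recurs while ghcr.io keeps returning NotFound. A minimal sketch of that schedule in Go, assuming the commonly cited kubelet defaults of a 10-second initial delay capped at 5 minutes (the numbers are an assumption, not taken from these logs, and this is not kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Illustrative back-off schedule: start small, double per failed
        // pull, clamp at a maximum. Mirrors the ErrImagePull ->
        // ImagePullBackOff pattern seen in the log above.
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("pull attempt %d failed; backing off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }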
Nov 8 00:29:27.676993 systemd-networkd[1371]: cali37b8e5627c5: Link UP Nov 8 00:29:27.678916 systemd-networkd[1371]: cali37b8e5627c5: Gained carrier Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.580 [INFO][3984] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.593 [INFO][3984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0 calico-kube-controllers-66784f75f9- calico-system e59ce4b6-f87c-444d-abb1-31c4a685274a 932 0 2025-11-08 00:29:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66784f75f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 calico-kube-controllers-66784f75f9-twnqc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37b8e5627c5 [] [] }} ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.594 [INFO][3984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.628 [INFO][3995] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" HandleID="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.628 [INFO][3995] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" HandleID="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"calico-kube-controllers-66784f75f9-twnqc", "timestamp":"2025-11-08 00:29:27.628187936 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.628 [INFO][3995] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.629 [INFO][3995] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.629 [INFO][3995] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.637 [INFO][3995] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.642 [INFO][3995] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.648 [INFO][3995] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.651 [INFO][3995] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.654 [INFO][3995] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.654 [INFO][3995] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.656 [INFO][3995] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.660 [INFO][3995] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.669 [INFO][3995] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.194/26] block=192.168.38.192/26 handle="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.669 [INFO][3995] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.194/26] handle="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.669 [INFO][3995] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
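The ipam/ipam.go sequence above shows Calico confirming this node's affinity for the 192.168.38.192/26 block and then, while holding the host-wide IPAM lock, claiming the next free address (192.168.38.194, after 192.168.38.193 went to the whisker pod earlier). A minimal sketch in Go of picking the first unused address in such a block; this is not Calico's implementation, and nextFree and the used map are hypothetical names for illustration:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree scans a block in address order and returns the first
    // address not already marked as used.
    func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !used[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.38.192/26")
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.38.192"): true, // treated as reserved in this sketch
            netip.MustParseAddr("192.168.38.193"): true, // already assigned (whisker pod above)
        }
        if ip, ok := nextFree(block, used); ok {
            fmt.Println("claimed", ip) // prints 192.168.38.194, matching the log
        }
    }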
Nov 8 00:29:27.704930 containerd[1463]: 2025-11-08 00:29:27.669 [INFO][3995] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.194/26] IPv6=[] ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" HandleID="k8s-pod-network.b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.706221 containerd[1463]: 2025-11-08 00:29:27.672 [INFO][3984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0", GenerateName:"calico-kube-controllers-66784f75f9-", Namespace:"calico-system", SelfLink:"", UID:"e59ce4b6-f87c-444d-abb1-31c4a685274a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66784f75f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"calico-kube-controllers-66784f75f9-twnqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37b8e5627c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:27.706221 containerd[1463]: 2025-11-08 00:29:27.672 [INFO][3984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.194/32] ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.706221 containerd[1463]: 2025-11-08 00:29:27.672 [INFO][3984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37b8e5627c5 ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.706221 containerd[1463]: 2025-11-08 00:29:27.678 [INFO][3984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.706221 containerd[1463]: 2025-11-08 00:29:27.681 [INFO][3984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0", GenerateName:"calico-kube-controllers-66784f75f9-", Namespace:"calico-system", SelfLink:"", UID:"e59ce4b6-f87c-444d-abb1-31c4a685274a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66784f75f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b", Pod:"calico-kube-controllers-66784f75f9-twnqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37b8e5627c5", MAC:"0e:e0:27:51:89:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:27.706221 containerd[1463]: 2025-11-08 00:29:27.700 [INFO][3984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b" Namespace="calico-system" Pod="calico-kube-controllers-66784f75f9-twnqc" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:27.733475 containerd[1463]: time="2025-11-08T00:29:27.733042719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:27.733475 containerd[1463]: time="2025-11-08T00:29:27.733126833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:27.733475 containerd[1463]: time="2025-11-08T00:29:27.733153968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:27.733923 containerd[1463]: time="2025-11-08T00:29:27.733471251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:27.772972 systemd[1]: Started cri-containerd-b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b.scope - libcontainer container b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b. Nov 8 00:29:27.837089 containerd[1463]: time="2025-11-08T00:29:27.837035761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66784f75f9-twnqc,Uid:e59ce4b6-f87c-444d-abb1-31c4a685274a,Namespace:calico-system,Attempt:1,} returns sandbox id \"b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b\"" Nov 8 00:29:27.840512 containerd[1463]: time="2025-11-08T00:29:27.840122924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:29:28.009567 containerd[1463]: time="2025-11-08T00:29:28.009396976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:28.011847 containerd[1463]: time="2025-11-08T00:29:28.011312127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:29:28.011847 containerd[1463]: time="2025-11-08T00:29:28.011682188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:28.012893 kubelet[2553]: E1108 00:29:28.012236 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:28.012893 kubelet[2553]: E1108 00:29:28.012308 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:28.012893 kubelet[2553]: E1108 00:29:28.012513 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mb96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66784f75f9-twnqc_calico-system(e59ce4b6-f87c-444d-abb1-31c4a685274a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:28.015132 kubelet[2553]: E1108 00:29:28.015078 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:29:28.402469 containerd[1463]: time="2025-11-08T00:29:28.401821347Z" level=info msg="StopPodSandbox 
for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\"" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.461 [INFO][4078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.463 [INFO][4078] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" iface="eth0" netns="/var/run/netns/cni-9281190a-199c-4ac3-1be9-658a9790045f" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.463 [INFO][4078] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" iface="eth0" netns="/var/run/netns/cni-9281190a-199c-4ac3-1be9-658a9790045f" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.463 [INFO][4078] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" iface="eth0" netns="/var/run/netns/cni-9281190a-199c-4ac3-1be9-658a9790045f" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.463 [INFO][4078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.464 [INFO][4078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.497 [INFO][4085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.498 [INFO][4085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.498 [INFO][4085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.510 [WARNING][4085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.510 [INFO][4085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.512 [INFO][4085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:28.515477 containerd[1463]: 2025-11-08 00:29:28.513 [INFO][4078] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:28.517062 containerd[1463]: time="2025-11-08T00:29:28.515767928Z" level=info msg="TearDown network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" successfully" Nov 8 00:29:28.517062 containerd[1463]: time="2025-11-08T00:29:28.515808107Z" level=info msg="StopPodSandbox for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" returns successfully" Nov 8 00:29:28.517062 containerd[1463]: time="2025-11-08T00:29:28.516781766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vbsgj,Uid:f25962ab-cc66-4ae1-b3d0-2209da78cffc,Namespace:kube-system,Attempt:1,}" Nov 8 00:29:28.523572 systemd[1]: run-netns-cni\x2d9281190a\x2d199c\x2d4ac3\x2d1be9\x2d658a9790045f.mount: Deactivated successfully. Nov 8 00:29:28.687206 systemd-networkd[1371]: cali4e9746f878f: Link UP Nov 8 00:29:28.687704 systemd-networkd[1371]: cali4e9746f878f: Gained carrier Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.583 [INFO][4091] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.598 [INFO][4091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0 coredns-668d6bf9bc- kube-system f25962ab-cc66-4ae1-b3d0-2209da78cffc 941 0 2025-11-08 00:28:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 coredns-668d6bf9bc-vbsgj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e9746f878f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.598 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.637 [INFO][4103] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" HandleID="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.637 [INFO][4103] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" HandleID="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8b0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"coredns-668d6bf9bc-vbsgj", "timestamp":"2025-11-08 00:29:28.637553713 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.637 [INFO][4103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.638 [INFO][4103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.638 [INFO][4103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.649 [INFO][4103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.654 [INFO][4103] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.659 [INFO][4103] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.662 [INFO][4103] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.664 [INFO][4103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.664 [INFO][4103] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.666 [INFO][4103] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.672 [INFO][4103] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.680 [INFO][4103] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.195/26] block=192.168.38.192/26 handle="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.680 [INFO][4103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.195/26] handle="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.680 [INFO][4103] ipam/ipam_plugin.go 398: Released 
host-wide IPAM lock. Nov 8 00:29:28.710608 containerd[1463]: 2025-11-08 00:29:28.680 [INFO][4103] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.195/26] IPv6=[] ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" HandleID="k8s-pod-network.8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.712057 containerd[1463]: 2025-11-08 00:29:28.683 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f25962ab-cc66-4ae1-b3d0-2209da78cffc", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"coredns-668d6bf9bc-vbsgj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9746f878f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:28.712057 containerd[1463]: 2025-11-08 00:29:28.683 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.195/32] ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.712057 containerd[1463]: 2025-11-08 00:29:28.683 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e9746f878f ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.712057 containerd[1463]: 
2025-11-08 00:29:28.688 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.712057 containerd[1463]: 2025-11-08 00:29:28.689 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f25962ab-cc66-4ae1-b3d0-2209da78cffc", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d", Pod:"coredns-668d6bf9bc-vbsgj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9746f878f", MAC:"62:02:07:d2:1c:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:28.712057 containerd[1463]: 2025-11-08 00:29:28.708 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-vbsgj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:28.736253 containerd[1463]: time="2025-11-08T00:29:28.736119571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:28.736253 containerd[1463]: time="2025-11-08T00:29:28.736200510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:28.736806 containerd[1463]: time="2025-11-08T00:29:28.736225990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:28.736806 containerd[1463]: time="2025-11-08T00:29:28.736344287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:28.775013 systemd[1]: Started cri-containerd-8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d.scope - libcontainer container 8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d. Nov 8 00:29:28.837580 kubelet[2553]: E1108 00:29:28.836914 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:29:28.846048 containerd[1463]: time="2025-11-08T00:29:28.846000685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vbsgj,Uid:f25962ab-cc66-4ae1-b3d0-2209da78cffc,Namespace:kube-system,Attempt:1,} returns sandbox id \"8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d\"" Nov 8 00:29:28.853625 containerd[1463]: time="2025-11-08T00:29:28.853565040Z" level=info msg="CreateContainer within sandbox \"8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:29:28.879831 containerd[1463]: time="2025-11-08T00:29:28.879767273Z" level=info msg="CreateContainer within sandbox \"8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a896ee5c9fb530abd533920b413ab2db5033b8ba821f74249bfcc9580f7f0353\"" Nov 8 00:29:28.882005 containerd[1463]: time="2025-11-08T00:29:28.881955185Z" level=info msg="StartContainer for \"a896ee5c9fb530abd533920b413ab2db5033b8ba821f74249bfcc9580f7f0353\"" Nov 8 00:29:28.939972 systemd[1]: Started cri-containerd-a896ee5c9fb530abd533920b413ab2db5033b8ba821f74249bfcc9580f7f0353.scope - libcontainer container a896ee5c9fb530abd533920b413ab2db5033b8ba821f74249bfcc9580f7f0353. Nov 8 00:29:28.990258 containerd[1463]: time="2025-11-08T00:29:28.990205476Z" level=info msg="StartContainer for \"a896ee5c9fb530abd533920b413ab2db5033b8ba821f74249bfcc9580f7f0353\" returns successfully" Nov 8 00:29:29.324210 systemd-networkd[1371]: cali37b8e5627c5: Gained IPv6LL Nov 8 00:29:29.405677 containerd[1463]: time="2025-11-08T00:29:29.404308123Z" level=info msg="StopPodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\"" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.463 [INFO][4226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.464 [INFO][4226] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" iface="eth0" netns="/var/run/netns/cni-11b23bd0-557b-fd82-fc6d-3f996397c80e" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.465 [INFO][4226] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" iface="eth0" netns="/var/run/netns/cni-11b23bd0-557b-fd82-fc6d-3f996397c80e" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.469 [INFO][4226] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" iface="eth0" netns="/var/run/netns/cni-11b23bd0-557b-fd82-fc6d-3f996397c80e" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.469 [INFO][4226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.469 [INFO][4226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.501 [INFO][4233] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.501 [INFO][4233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.501 [INFO][4233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.509 [WARNING][4233] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.509 [INFO][4233] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.511 [INFO][4233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:29.514869 containerd[1463]: 2025-11-08 00:29:29.513 [INFO][4226] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:29.516091 containerd[1463]: time="2025-11-08T00:29:29.515073098Z" level=info msg="TearDown network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" successfully" Nov 8 00:29:29.516091 containerd[1463]: time="2025-11-08T00:29:29.515130393Z" level=info msg="StopPodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" returns successfully" Nov 8 00:29:29.516256 containerd[1463]: time="2025-11-08T00:29:29.516224265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zfhqj,Uid:3f87c08c-b49d-4506-b03a-99a6bbfdb418,Namespace:kube-system,Attempt:1,}" Nov 8 00:29:29.524936 systemd[1]: run-netns-cni\x2d11b23bd0\x2d557b\x2dfd82\x2dfc6d\x2d3f996397c80e.mount: Deactivated successfully. Nov 8 00:29:29.672217 systemd-networkd[1371]: calidbc6d3df566: Link UP Nov 8 00:29:29.673602 systemd-networkd[1371]: calidbc6d3df566: Gained carrier Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.575 [INFO][4241] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.589 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0 coredns-668d6bf9bc- kube-system 3f87c08c-b49d-4506-b03a-99a6bbfdb418 954 0 2025-11-08 00:28:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 coredns-668d6bf9bc-zfhqj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidbc6d3df566 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.589 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.622 [INFO][4252] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" HandleID="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.622 [INFO][4252] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" HandleID="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"coredns-668d6bf9bc-zfhqj", "timestamp":"2025-11-08 00:29:29.622105688 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.622 [INFO][4252] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.622 [INFO][4252] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.622 [INFO][4252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.631 [INFO][4252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.637 [INFO][4252] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.642 [INFO][4252] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.644 [INFO][4252] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.647 [INFO][4252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.647 [INFO][4252] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.651 [INFO][4252] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8 Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.657 [INFO][4252] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.664 [INFO][4252] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.196/26] block=192.168.38.192/26 handle="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.665 [INFO][4252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.196/26] handle="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.665 [INFO][4252] ipam/ipam_plugin.go 398: Released 
host-wide IPAM lock. Nov 8 00:29:29.694273 containerd[1463]: 2025-11-08 00:29:29.665 [INFO][4252] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.196/26] IPv6=[] ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" HandleID="k8s-pod-network.7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.695468 containerd[1463]: 2025-11-08 00:29:29.668 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f87c08c-b49d-4506-b03a-99a6bbfdb418", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"coredns-668d6bf9bc-zfhqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbc6d3df566", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:29.695468 containerd[1463]: 2025-11-08 00:29:29.668 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.196/32] ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.695468 containerd[1463]: 2025-11-08 00:29:29.668 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbc6d3df566 ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.695468 containerd[1463]: 
2025-11-08 00:29:29.673 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.695468 containerd[1463]: 2025-11-08 00:29:29.674 [INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f87c08c-b49d-4506-b03a-99a6bbfdb418", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8", Pod:"coredns-668d6bf9bc-zfhqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbc6d3df566", MAC:"fe:e1:4b:f3:11:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:29.695468 containerd[1463]: 2025-11-08 00:29:29.690 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-zfhqj" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:29.721063 containerd[1463]: time="2025-11-08T00:29:29.720904834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:29.721063 containerd[1463]: time="2025-11-08T00:29:29.720982573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:29.721063 containerd[1463]: time="2025-11-08T00:29:29.721009101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.721446 containerd[1463]: time="2025-11-08T00:29:29.721150224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:29.761923 systemd[1]: Started cri-containerd-7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8.scope - libcontainer container 7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8. Nov 8 00:29:29.825760 containerd[1463]: time="2025-11-08T00:29:29.825691379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zfhqj,Uid:3f87c08c-b49d-4506-b03a-99a6bbfdb418,Namespace:kube-system,Attempt:1,} returns sandbox id \"7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8\"" Nov 8 00:29:29.831755 containerd[1463]: time="2025-11-08T00:29:29.831680565Z" level=info msg="CreateContainer within sandbox \"7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:29:29.848957 kubelet[2553]: E1108 00:29:29.848491 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:29:29.857232 containerd[1463]: time="2025-11-08T00:29:29.856998536Z" level=info msg="CreateContainer within sandbox \"7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"febd88c44ceb1e3083c49b614420995406f467615083d1b51050addaf0df7cca\"" Nov 8 00:29:29.857790 containerd[1463]: time="2025-11-08T00:29:29.857747494Z" level=info msg="StartContainer for \"febd88c44ceb1e3083c49b614420995406f467615083d1b51050addaf0df7cca\"" Nov 8 00:29:29.898237 kubelet[2553]: I1108 00:29:29.898079 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vbsgj" podStartSLOduration=42.898053215 podStartE2EDuration="42.898053215s" podCreationTimestamp="2025-11-08 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:29.870173574 +0000 UTC m=+48.614608437" watchObservedRunningTime="2025-11-08 00:29:29.898053215 +0000 UTC m=+48.642488077" Nov 8 00:29:29.927982 systemd[1]: Started cri-containerd-febd88c44ceb1e3083c49b614420995406f467615083d1b51050addaf0df7cca.scope - libcontainer container febd88c44ceb1e3083c49b614420995406f467615083d1b51050addaf0df7cca. 
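The kube-controllers record above shows that pod already in ImagePullBackOff: the ghcr.io tag resolves to NotFound, so kubelet stops pulling and schedules retries on a growing delay. A minimal Go sketch of that doubling schedule, assuming kubelet's default 10s initial backoff and 5m cap (assumed defaults, not values read from this node):

    package main

    import (
        "fmt"
        "time"
    )

    // backoff returns the delay before the next pull attempt, doubling from
    // an assumed 10s base and capping at an assumed 5m, mirroring the
    // ImagePullBackOff behaviour in the kubelet records above.
    func backoff(failures int) time.Duration {
        d := 10 * time.Second
        for i := 0; i < failures; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("after failure %d: retry in %s\n", n, backoff(n-1))
        }
    }
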
Nov 8 00:29:29.986465 containerd[1463]: time="2025-11-08T00:29:29.986312967Z" level=info msg="StartContainer for \"febd88c44ceb1e3083c49b614420995406f467615083d1b51050addaf0df7cca\" returns successfully" Nov 8 00:29:30.283964 systemd-networkd[1371]: cali4e9746f878f: Gained IPv6LL Nov 8 00:29:30.402308 containerd[1463]: time="2025-11-08T00:29:30.402256383Z" level=info msg="StopPodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\"" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.488 [INFO][4374] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.488 [INFO][4374] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" iface="eth0" netns="/var/run/netns/cni-a5c2f468-fafa-5a98-d1d6-6136a8d8c429" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.489 [INFO][4374] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" iface="eth0" netns="/var/run/netns/cni-a5c2f468-fafa-5a98-d1d6-6136a8d8c429" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.489 [INFO][4374] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" iface="eth0" netns="/var/run/netns/cni-a5c2f468-fafa-5a98-d1d6-6136a8d8c429" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.489 [INFO][4374] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.490 [INFO][4374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.546 [INFO][4381] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.546 [INFO][4381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.546 [INFO][4381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.558 [WARNING][4381] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.558 [INFO][4381] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.561 [INFO][4381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:30.567028 containerd[1463]: 2025-11-08 00:29:30.564 [INFO][4374] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:30.568658 containerd[1463]: time="2025-11-08T00:29:30.567210296Z" level=info msg="TearDown network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" successfully" Nov 8 00:29:30.568658 containerd[1463]: time="2025-11-08T00:29:30.567295832Z" level=info msg="StopPodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" returns successfully" Nov 8 00:29:30.571105 containerd[1463]: time="2025-11-08T00:29:30.571008044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b28gg,Uid:418020d8-f003-4c3d-bcf6-3368810f5d40,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:30.585705 systemd[1]: run-netns-cni\x2da5c2f468\x2dfafa\x2d5a98\x2dd1d6\x2d6136a8d8c429.mount: Deactivated successfully. 
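Both teardowns in this section follow the same release order: try the handle ID, fall back to the workload ID, and when neither maps to a live allocation, log the "Asked to release address but it doesn't exist. Ignoring" warning and carry on, which keeps repeated CNI DELs for the same sandbox idempotent. A toy sketch of that fallback (a map stands in for the IPAM datastore; the IDs are shortened and the address is illustrative, not Calico's actual storage):

    package main

    import "fmt"

    // allocations stands in for the IPAM datastore consulted above.
    var allocations = map[string]string{
        "k8s-pod-network.097b09c5": "192.168.38.195", // illustrative address
    }

    // release tries the handle ID first, then the workload ID, and treats a
    // missing entry as already-released so a second DEL is a no-op.
    func release(ids ...string) {
        for _, id := range ids {
            if addr, ok := allocations[id]; ok {
                delete(allocations, id)
                fmt.Printf("released %s via %s\n", addr, id)
                return
            }
            fmt.Printf("no allocation for %s, ignoring\n", id)
        }
    }

    func main() {
        release("k8s-pod-network.097b09c5", "goldmane-666569f655-b28gg")
        release("k8s-pod-network.097b09c5", "goldmane-666569f655-b28gg") // idempotent
    }
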
Nov 8 00:29:30.766418 systemd-networkd[1371]: calic7ace19a124: Link UP Nov 8 00:29:30.767570 systemd-networkd[1371]: calic7ace19a124: Gained carrier Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.658 [INFO][4391] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.687 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0 goldmane-666569f655- calico-system 418020d8-f003-4c3d-bcf6-3368810f5d40 976 0 2025-11-08 00:29:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 goldmane-666569f655-b28gg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic7ace19a124 [] [] }} ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.687 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.719 [INFO][4399] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" HandleID="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.720 [INFO][4399] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" HandleID="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"goldmane-666569f655-b28gg", "timestamp":"2025-11-08 00:29:30.719904902 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.720 [INFO][4399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.720 [INFO][4399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
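Every IPAM operation in this log is bracketed by "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" lines: concurrent CNI invocations on one node (the [4381] release and the [4399] assign here) serialize on a single per-host lock before touching the shared address block. A toy illustration of why that matters, with a mutex standing in for the host-wide lock (not Calico's implementation):

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        hostLock sync.Mutex // stands in for the host-wide IPAM lock
        nextHost = 196      // next free host number in 192.168.38.192/26, per the log
    )

    // assign claims the next address under the lock; without it, two CNI
    // invocations could both read nextHost before either increments it and
    // hand the same IP to two pods.
    func assign(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        hostLock.Lock() // "About to acquire host-wide IPAM lock."
        defer hostLock.Unlock()
        fmt.Printf("%s -> 192.168.38.%d/26\n", pod, nextHost)
        nextHost++
    }

    func main() {
        var wg sync.WaitGroup
        for _, pod := range []string{"coredns-668d6bf9bc-zfhqj", "goldmane-666569f655-b28gg"} {
            wg.Add(1)
            go assign(pod, &wg)
        }
        wg.Wait()
    }
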
Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.720 [INFO][4399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.730 [INFO][4399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.735 [INFO][4399] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.741 [INFO][4399] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.743 [INFO][4399] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.745 [INFO][4399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.746 [INFO][4399] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.747 [INFO][4399] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.751 [INFO][4399] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.760 [INFO][4399] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.197/26] block=192.168.38.192/26 handle="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.760 [INFO][4399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.197/26] handle="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.760 [INFO][4399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
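The affinity checks above keep landing on the same block: this node owns 192.168.38.192/26, and every pod in this section is carved out of it. A /26 leaves room for 64 addresses; a quick stdlib check that the claimed IPs fall inside the block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.38.192/26")
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits())) // 64
        for _, s := range []string{"192.168.38.196", "192.168.38.197"} {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in block: %t\n", addr, block.Contains(addr))
        }
    }
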
Nov 8 00:29:30.791610 containerd[1463]: 2025-11-08 00:29:30.760 [INFO][4399] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.197/26] IPv6=[] ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" HandleID="k8s-pod-network.38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.792981 containerd[1463]: 2025-11-08 00:29:30.763 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"418020d8-f003-4c3d-bcf6-3368810f5d40", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"goldmane-666569f655-b28gg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7ace19a124", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:30.792981 containerd[1463]: 2025-11-08 00:29:30.763 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.197/32] ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.792981 containerd[1463]: 2025-11-08 00:29:30.763 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7ace19a124 ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.792981 containerd[1463]: 2025-11-08 00:29:30.767 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.792981 
containerd[1463]: 2025-11-08 00:29:30.769 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"418020d8-f003-4c3d-bcf6-3368810f5d40", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd", Pod:"goldmane-666569f655-b28gg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7ace19a124", MAC:"da:ba:b1:d5:91:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:30.792981 containerd[1463]: 2025-11-08 00:29:30.787 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd" Namespace="calico-system" Pod="goldmane-666569f655-b28gg" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:30.829916 containerd[1463]: time="2025-11-08T00:29:30.827467445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:30.829916 containerd[1463]: time="2025-11-08T00:29:30.827832905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:30.829916 containerd[1463]: time="2025-11-08T00:29:30.827915617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:30.829916 containerd[1463]: time="2025-11-08T00:29:30.828107881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:30.883226 systemd[1]: Started cri-containerd-38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd.scope - libcontainer container 38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd. 
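Both workload endpoints above now carry generated MACs (fe:e1:4b:f3:11:6b for the coredns endpoint, da:ba:b1:d5:91:93 for goldmane's). Each has the locally-administered bit set in the first octet, as expected for addresses minted by the CNI plugin rather than burned into hardware; this can be verified with the stdlib:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, s := range []string{"fe:e1:4b:f3:11:6b", "da:ba:b1:d5:91:93"} {
            mac, err := net.ParseMAC(s)
            if err != nil {
                panic(err)
            }
            local := mac[0]&0x02 != 0   // locally administered bit set
            unicast := mac[0]&0x01 == 0 // group bit clear
            fmt.Printf("%s locally-administered=%t unicast=%t\n", mac, local, unicast)
        }
    }
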
Nov 8 00:29:30.989596 containerd[1463]: time="2025-11-08T00:29:30.989473695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-b28gg,Uid:418020d8-f003-4c3d-bcf6-3368810f5d40,Namespace:calico-system,Attempt:1,} returns sandbox id \"38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd\"" Nov 8 00:29:30.994807 containerd[1463]: time="2025-11-08T00:29:30.994442595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:29:31.186136 containerd[1463]: time="2025-11-08T00:29:31.185890227Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:31.187981 containerd[1463]: time="2025-11-08T00:29:31.187771116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:29:31.187981 containerd[1463]: time="2025-11-08T00:29:31.187903331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:31.188425 kubelet[2553]: E1108 00:29:31.188361 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:31.189124 kubelet[2553]: E1108 00:29:31.188442 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:31.189124 kubelet[2553]: E1108 00:29:31.188651 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b28gg_calico-system(418020d8-f003-4c3d-bcf6-3368810f5d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:31.190898 kubelet[2553]: E1108 00:29:31.190842 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:29:31.243944 systemd-networkd[1371]: 
calidbc6d3df566: Gained IPv6LL Nov 8 00:29:31.405846 containerd[1463]: time="2025-11-08T00:29:31.404248777Z" level=info msg="StopPodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\"" Nov 8 00:29:31.406183 containerd[1463]: time="2025-11-08T00:29:31.404266785Z" level=info msg="StopPodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\"" Nov 8 00:29:31.538462 kubelet[2553]: I1108 00:29:31.538059 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zfhqj" podStartSLOduration=44.538031032 podStartE2EDuration="44.538031032s" podCreationTimestamp="2025-11-08 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:30.885495317 +0000 UTC m=+49.629930181" watchObservedRunningTime="2025-11-08 00:29:31.538031032 +0000 UTC m=+50.282465895" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.545 [INFO][4479] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.546 [INFO][4479] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" iface="eth0" netns="/var/run/netns/cni-1593c892-b30d-6c86-b38b-ed76cecae414" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.546 [INFO][4479] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" iface="eth0" netns="/var/run/netns/cni-1593c892-b30d-6c86-b38b-ed76cecae414" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.546 [INFO][4479] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" iface="eth0" netns="/var/run/netns/cni-1593c892-b30d-6c86-b38b-ed76cecae414" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.546 [INFO][4479] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.546 [INFO][4479] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.619 [INFO][4505] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.621 [INFO][4505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.621 [INFO][4505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.632 [WARNING][4505] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.633 [INFO][4505] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.635 [INFO][4505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:31.639980 containerd[1463]: 2025-11-08 00:29:31.638 [INFO][4479] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:31.641078 containerd[1463]: time="2025-11-08T00:29:31.640877924Z" level=info msg="TearDown network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" successfully" Nov 8 00:29:31.641078 containerd[1463]: time="2025-11-08T00:29:31.640919821Z" level=info msg="StopPodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" returns successfully" Nov 8 00:29:31.643089 containerd[1463]: time="2025-11-08T00:29:31.643043563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-2lhrt,Uid:06ee44fb-81f0-4173-813e-506c57500250,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:29:31.655635 systemd[1]: run-netns-cni\x2d1593c892\x2db30d\x2d6c86\x2db38b\x2ded76cecae414.mount: Deactivated successfully. Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.554 [INFO][4485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.555 [INFO][4485] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" iface="eth0" netns="/var/run/netns/cni-1519499f-0a12-24fa-e671-94a044d98497" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.555 [INFO][4485] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" iface="eth0" netns="/var/run/netns/cni-1519499f-0a12-24fa-e671-94a044d98497" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.555 [INFO][4485] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" iface="eth0" netns="/var/run/netns/cni-1519499f-0a12-24fa-e671-94a044d98497" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.555 [INFO][4485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.555 [INFO][4485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.632 [INFO][4510] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.632 [INFO][4510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.635 [INFO][4510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.657 [WARNING][4510] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.657 [INFO][4510] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.660 [INFO][4510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:31.667157 containerd[1463]: 2025-11-08 00:29:31.664 [INFO][4485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:31.667157 containerd[1463]: time="2025-11-08T00:29:31.666846786Z" level=info msg="TearDown network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" successfully" Nov 8 00:29:31.667157 containerd[1463]: time="2025-11-08T00:29:31.666884927Z" level=info msg="StopPodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" returns successfully" Nov 8 00:29:31.671080 containerd[1463]: time="2025-11-08T00:29:31.669596910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-w6w2r,Uid:f4df0bd6-9275-4ffb-bc86-7dbb94791082,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:29:31.679434 systemd[1]: run-netns-cni\x2d1519499f\x2d0a12\x2d24fa\x2de671\x2d94a044d98497.mount: Deactivated successfully. 
Nov 8 00:29:31.864795 kubelet[2553]: E1108 00:29:31.864005 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:29:32.043137 systemd-networkd[1371]: cali33ffa381387: Link UP Nov 8 00:29:32.045876 systemd-networkd[1371]: cali33ffa381387: Gained carrier Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.828 [INFO][4526] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.861 [INFO][4526] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0 calico-apiserver-969f74cdf- calico-apiserver f4df0bd6-9275-4ffb-bc86-7dbb94791082 993 0 2025-11-08 00:28:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:969f74cdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 calico-apiserver-969f74cdf-w6w2r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali33ffa381387 [] [] }} ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.863 [INFO][4526] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.967 [INFO][4551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" HandleID="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.972 [INFO][4551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" HandleID="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003559d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"calico-apiserver-969f74cdf-w6w2r", 
"timestamp":"2025-11-08 00:29:31.967790045 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.972 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.972 [INFO][4551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.972 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.985 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:31.992 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.001 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.004 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.008 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.009 [INFO][4551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.011 [INFO][4551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63 Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.019 [INFO][4551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.030 [INFO][4551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.198/26] block=192.168.38.192/26 handle="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.031 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.198/26] handle="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.031 [INFO][4551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:29:32.083841 containerd[1463]: 2025-11-08 00:29:32.031 [INFO][4551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.198/26] IPv6=[] ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" HandleID="k8s-pod-network.429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.086372 containerd[1463]: 2025-11-08 00:29:32.035 [INFO][4526] cni-plugin/k8s.go 418: Populated endpoint ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4df0bd6-9275-4ffb-bc86-7dbb94791082", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"calico-apiserver-969f74cdf-w6w2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33ffa381387", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:32.086372 containerd[1463]: 2025-11-08 00:29:32.036 [INFO][4526] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.198/32] ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.086372 containerd[1463]: 2025-11-08 00:29:32.036 [INFO][4526] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33ffa381387 ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.086372 containerd[1463]: 2025-11-08 00:29:32.045 [INFO][4526] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" 
Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.086372 containerd[1463]: 2025-11-08 00:29:32.047 [INFO][4526] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4df0bd6-9275-4ffb-bc86-7dbb94791082", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63", Pod:"calico-apiserver-969f74cdf-w6w2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33ffa381387", MAC:"e6:dc:3d:7b:05:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:32.086372 containerd[1463]: 2025-11-08 00:29:32.077 [INFO][4526] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-w6w2r" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:32.133899 containerd[1463]: time="2025-11-08T00:29:32.132051219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:32.133899 containerd[1463]: time="2025-11-08T00:29:32.132144138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:32.133899 containerd[1463]: time="2025-11-08T00:29:32.132190315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:32.133899 containerd[1463]: time="2025-11-08T00:29:32.132349981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:32.189825 systemd-networkd[1371]: cali3aba865132c: Link UP Nov 8 00:29:32.209811 systemd-networkd[1371]: cali3aba865132c: Gained carrier Nov 8 00:29:32.210034 systemd[1]: Started cri-containerd-429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63.scope - libcontainer container 429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63. Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:31.804 [INFO][4519] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:31.833 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0 calico-apiserver-969f74cdf- calico-apiserver 06ee44fb-81f0-4173-813e-506c57500250 992 0 2025-11-08 00:28:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:969f74cdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 calico-apiserver-969f74cdf-2lhrt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3aba865132c [] [] }} ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:31.833 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:31.988 [INFO][4545] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" HandleID="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:31.989 [INFO][4545] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" HandleID="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000100ae0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"calico-apiserver-969f74cdf-2lhrt", "timestamp":"2025-11-08 00:29:31.988221038 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:32.241157 
containerd[1463]: 2025-11-08 00:29:31.989 [INFO][4545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.031 [INFO][4545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.031 [INFO][4545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.087 [INFO][4545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.100 [INFO][4545] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.107 [INFO][4545] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.112 [INFO][4545] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.120 [INFO][4545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.120 [INFO][4545] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.125 [INFO][4545] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513 Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.149 [INFO][4545] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.169 [INFO][4545] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.199/26] block=192.168.38.192/26 handle="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.169 [INFO][4545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.199/26] handle="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.169 [INFO][4545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
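Note the serialization across the two concurrent CNI invocations: process [4545] asks for the host-wide IPAM lock at 00:29:31.989 but only acquires it at 00:29:32.031 — the same instant [4551] releases it. A plain mutex reproduces the shape of that handoff (a sketch with hypothetical names, not the plugin's code):

```go
// Two "CNI invocations" contending for one host-wide lock: the second
// blocks until the first finishes its block lookup and claim.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var ipamLock sync.Mutex // stands in for the host-wide IPAM lock
	var wg sync.WaitGroup
	for _, id := range []string{"4551", "4545"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", id)
			ipamLock.Lock()
			fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", id)
			time.Sleep(50 * time.Millisecond) // affinity lookup + address claim
			ipamLock.Unlock()
			fmt.Printf("[%s] Released host-wide IPAM lock.\n", id)
		}(id)
		time.Sleep(10 * time.Millisecond) // [4551] starts first, as in the log
	}
	wg.Wait()
}
```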
Nov 8 00:29:32.241157 containerd[1463]: 2025-11-08 00:29:32.170 [INFO][4545] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.199/26] IPv6=[] ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" HandleID="k8s-pod-network.cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.243873 containerd[1463]: 2025-11-08 00:29:32.179 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"06ee44fb-81f0-4173-813e-506c57500250", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"calico-apiserver-969f74cdf-2lhrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3aba865132c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:32.243873 containerd[1463]: 2025-11-08 00:29:32.179 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.199/32] ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.243873 containerd[1463]: 2025-11-08 00:29:32.180 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3aba865132c ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.243873 containerd[1463]: 2025-11-08 00:29:32.215 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" 
Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.243873 containerd[1463]: 2025-11-08 00:29:32.218 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"06ee44fb-81f0-4173-813e-506c57500250", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513", Pod:"calico-apiserver-969f74cdf-2lhrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3aba865132c", MAC:"e2:6f:c4:af:42:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:32.243873 containerd[1463]: 2025-11-08 00:29:32.237 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513" Namespace="calico-apiserver" Pod="calico-apiserver-969f74cdf-2lhrt" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:32.296809 containerd[1463]: time="2025-11-08T00:29:32.296553283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:32.297758 containerd[1463]: time="2025-11-08T00:29:32.297349642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:32.297758 containerd[1463]: time="2025-11-08T00:29:32.297385178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:32.298925 containerd[1463]: time="2025-11-08T00:29:32.298827684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:32.337974 systemd[1]: Started cri-containerd-cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513.scope - libcontainer container cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513. Nov 8 00:29:32.356166 containerd[1463]: time="2025-11-08T00:29:32.356115605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-w6w2r,Uid:f4df0bd6-9275-4ffb-bc86-7dbb94791082,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63\"" Nov 8 00:29:32.360843 containerd[1463]: time="2025-11-08T00:29:32.360669151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:32.403425 containerd[1463]: time="2025-11-08T00:29:32.403085490Z" level=info msg="StopPodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\"" Nov 8 00:29:32.497372 containerd[1463]: time="2025-11-08T00:29:32.496971514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-969f74cdf-2lhrt,Uid:06ee44fb-81f0-4173-813e-506c57500250,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513\"" Nov 8 00:29:32.537212 containerd[1463]: time="2025-11-08T00:29:32.536957875Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:32.538613 containerd[1463]: time="2025-11-08T00:29:32.538557743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:32.539765 containerd[1463]: time="2025-11-08T00:29:32.538732073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:32.539878 kubelet[2553]: E1108 00:29:32.539580 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:32.539878 kubelet[2553]: E1108 00:29:32.539653 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:32.541178 kubelet[2553]: E1108 00:29:32.540753 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpvzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-969f74cdf-w6w2r_calico-apiserver(f4df0bd6-9275-4ffb-bc86-7dbb94791082): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:32.543026 kubelet[2553]: E1108 00:29:32.541983 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:29:32.543632 containerd[1463]: time="2025-11-08T00:29:32.543583032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.512 [INFO][4657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.512 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" iface="eth0" netns="/var/run/netns/cni-ec6b4e18-3876-9522-1ee0-1837acbca836" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.514 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" iface="eth0" netns="/var/run/netns/cni-ec6b4e18-3876-9522-1ee0-1837acbca836" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.515 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" iface="eth0" netns="/var/run/netns/cni-ec6b4e18-3876-9522-1ee0-1837acbca836" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.515 [INFO][4657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.515 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.567 [INFO][4672] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.567 [INFO][4672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.568 [INFO][4672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.580 [WARNING][4672] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.580 [INFO][4672] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.583 [INFO][4672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:32.589533 containerd[1463]: 2025-11-08 00:29:32.586 [INFO][4657] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:32.590705 containerd[1463]: time="2025-11-08T00:29:32.589899022Z" level=info msg="TearDown network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" successfully" Nov 8 00:29:32.590705 containerd[1463]: time="2025-11-08T00:29:32.589938240Z" level=info msg="StopPodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" returns successfully" Nov 8 00:29:32.592612 containerd[1463]: time="2025-11-08T00:29:32.592058149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kn6nq,Uid:5e54b7a9-1c64-4152-ae7f-d4eec2188483,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:32.652158 systemd-networkd[1371]: calic7ace19a124: Gained IPv6LL Nov 8 00:29:32.655197 systemd[1]: run-netns-cni\x2dec6b4e18\x2d3876\x2d9522\x2d1ee0\x2d1837acbca836.mount: Deactivated successfully. Nov 8 00:29:32.725596 containerd[1463]: time="2025-11-08T00:29:32.724940028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:32.727633 containerd[1463]: time="2025-11-08T00:29:32.727358710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:32.727633 containerd[1463]: time="2025-11-08T00:29:32.727482528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:32.729424 kubelet[2553]: E1108 00:29:32.729015 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:32.729424 kubelet[2553]: E1108 00:29:32.729097 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:32.729424 kubelet[2553]: E1108 00:29:32.729294 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xcp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-969f74cdf-2lhrt_calico-apiserver(06ee44fb-81f0-4173-813e-506c57500250): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:32.731779 kubelet[2553]: E1108 00:29:32.730513 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:29:32.817303 systemd-networkd[1371]: cali1ce0f5cb91c: Link UP Nov 8 00:29:32.818759 systemd-networkd[1371]: cali1ce0f5cb91c: Gained carrier Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.681 [INFO][4678] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.700 [INFO][4678] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0 csi-node-driver- calico-system 5e54b7a9-1c64-4152-ae7f-d4eec2188483 1016 0 2025-11-08 00:29:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562 csi-node-driver-kn6nq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1ce0f5cb91c [] [] }} ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.701 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.755 [INFO][4695] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" HandleID="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.755 [INFO][4695] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" HandleID="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", "pod":"csi-node-driver-kn6nq", "timestamp":"2025-11-08 00:29:32.755389983 +0000 UTC"}, Hostname:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.756 [INFO][4695] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.756 [INFO][4695] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.756 [INFO][4695] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562' Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.767 [INFO][4695] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.774 [INFO][4695] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.780 [INFO][4695] ipam/ipam.go 511: Trying affinity for 192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.783 [INFO][4695] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.786 [INFO][4695] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.787 [INFO][4695] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.791 [INFO][4695] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4 Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.798 [INFO][4695] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.810 [INFO][4695] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.200/26] block=192.168.38.192/26 handle="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.810 [INFO][4695] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.200/26] handle="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" host="ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562" Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.810 [INFO][4695] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
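Three sandboxes on this node have now drawn consecutive addresses (.198, .199, .200) from the same node-affine block. A /26 block holds 2^(32-26) = 64 addresses; checking the bounds of 192.168.38.192/26 with Go's net/netip (illustrative only):

```go
// Quick arithmetic on the node-affine block seen throughout the log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.38.192/26")
	n := 1 << (32 - block.Bits()) // 2^6 = 64 addresses in a /26
	first := block.Addr()
	last := first
	for i := 1; i < n; i++ {
		last = last.Next()
	}
	fmt.Printf("%s: %d addrs, %s - %s\n", block, n, first, last)
	// 192.168.38.192/26: 64 addrs, 192.168.38.192 - 192.168.38.255
}
```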
Nov 8 00:29:32.840061 containerd[1463]: 2025-11-08 00:29:32.810 [INFO][4695] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.200/26] IPv6=[] ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" HandleID="k8s-pod-network.560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.841332 containerd[1463]: 2025-11-08 00:29:32.812 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e54b7a9-1c64-4152-ae7f-d4eec2188483", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"", Pod:"csi-node-driver-kn6nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ce0f5cb91c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:32.841332 containerd[1463]: 2025-11-08 00:29:32.812 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.200/32] ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.841332 containerd[1463]: 2025-11-08 00:29:32.812 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ce0f5cb91c ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.841332 containerd[1463]: 2025-11-08 00:29:32.815 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" 
WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.841332 containerd[1463]: 2025-11-08 00:29:32.817 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e54b7a9-1c64-4152-ae7f-d4eec2188483", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4", Pod:"csi-node-driver-kn6nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ce0f5cb91c", MAC:"56:30:b5:47:9b:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:32.841332 containerd[1463]: 2025-11-08 00:29:32.836 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4" Namespace="calico-system" Pod="csi-node-driver-kn6nq" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:32.866992 kubelet[2553]: E1108 00:29:32.866276 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:29:32.870925 kubelet[2553]: E1108 00:29:32.869928 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:29:32.871514 kubelet[2553]: E1108 00:29:32.871458 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:29:32.916776 containerd[1463]: time="2025-11-08T00:29:32.914984311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:32.916776 containerd[1463]: time="2025-11-08T00:29:32.915183994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:32.916776 containerd[1463]: time="2025-11-08T00:29:32.915262432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:32.916776 containerd[1463]: time="2025-11-08T00:29:32.915637507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:32.977960 systemd[1]: Started cri-containerd-560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4.scope - libcontainer container 560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4. 
Nov 8 00:29:33.039177 containerd[1463]: time="2025-11-08T00:29:33.038638876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kn6nq,Uid:5e54b7a9-1c64-4152-ae7f-d4eec2188483,Namespace:calico-system,Attempt:1,} returns sandbox id \"560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4\"" Nov 8 00:29:33.042342 containerd[1463]: time="2025-11-08T00:29:33.042290100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:29:33.208489 containerd[1463]: time="2025-11-08T00:29:33.208068064Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:33.209662 containerd[1463]: time="2025-11-08T00:29:33.209598042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:29:33.209845 containerd[1463]: time="2025-11-08T00:29:33.209711109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:29:33.210028 kubelet[2553]: E1108 00:29:33.209966 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:33.210113 kubelet[2553]: E1108 00:29:33.210033 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:33.210287 kubelet[2553]: E1108 00:29:33.210230 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6hps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:33.213355 containerd[1463]: time="2025-11-08T00:29:33.213318848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:29:33.265825 kubelet[2553]: I1108 00:29:33.265114 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:29:33.292127 systemd-networkd[1371]: cali33ffa381387: Gained IPv6LL Nov 8 00:29:33.381258 containerd[1463]: time="2025-11-08T00:29:33.381190017Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:33.382751 containerd[1463]: time="2025-11-08T00:29:33.382671063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:29:33.382885 containerd[1463]: time="2025-11-08T00:29:33.382814096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:29:33.383080 kubelet[2553]: E1108 00:29:33.383030 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:33.383224 kubelet[2553]: E1108 00:29:33.383098 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:33.383356 kubelet[2553]: E1108 00:29:33.383293 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6hps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:33.384963 kubelet[2553]: E1108 00:29:33.384887 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:33.740669 systemd-networkd[1371]: cali3aba865132c: Gained IPv6LL Nov 8 00:29:33.876941 kubelet[2553]: E1108 00:29:33.876885 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:29:33.877561 kubelet[2553]: E1108 00:29:33.877007 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:29:33.879669 kubelet[2553]: E1108 00:29:33.879612 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:34.510789 kernel: bpftool[4828]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:29:34.636495 systemd-networkd[1371]: cali1ce0f5cb91c: Gained IPv6LL Nov 8 00:29:34.836245 systemd-networkd[1371]: vxlan.calico: Link UP Nov 8 00:29:34.836257 systemd-networkd[1371]: vxlan.calico: Gained carrier Nov 8 00:29:34.888123 kubelet[2553]: E1108 00:29:34.888053 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:29:36.171995 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Nov 8 00:29:38.403381 containerd[1463]: time="2025-11-08T00:29:38.402859501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:29:38.558400 containerd[1463]: time="2025-11-08T00:29:38.558306498Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:38.560099 containerd[1463]: time="2025-11-08T00:29:38.560035552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:29:38.560332 containerd[1463]: time="2025-11-08T00:29:38.560070699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:29:38.560405 kubelet[2553]: E1108 00:29:38.560320 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:38.560405 kubelet[2553]: E1108 00:29:38.560386 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:38.561006 kubelet[2553]: E1108 00:29:38.560548 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f246e03119bb4746838713acdb1a11df,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbvcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c54fb4dfc-znnpn_calico-system(5df9e569-a84b-4372-9851-dc5eac1e2252): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:38.565074 containerd[1463]: time="2025-11-08T00:29:38.565003093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:29:38.727363 containerd[1463]: time="2025-11-08T00:29:38.727182615Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:38.729035 containerd[1463]: time="2025-11-08T00:29:38.728959020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:29:38.729213 containerd[1463]: time="2025-11-08T00:29:38.728964057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:38.729519 kubelet[2553]: E1108 00:29:38.729435 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:38.729519 kubelet[2553]: E1108 00:29:38.729505 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:38.729773 kubelet[2553]: E1108 00:29:38.729678 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbvcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c54fb4dfc-znnpn_calico-system(5df9e569-a84b-4372-9851-dc5eac1e2252): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:38.731453 kubelet[2553]: E1108 00:29:38.731349 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252" Nov 8 00:29:39.011836 ntpd[1427]: Listen normally on 8 vxlan.calico 192.168.38.192:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 8 vxlan.calico 192.168.38.192:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 9 cali27b813c3112 
[fe80::ecee:eeff:feee:eeee%4]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 10 cali37b8e5627c5 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 11 cali4e9746f878f [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 12 calidbc6d3df566 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 13 calic7ace19a124 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 14 cali33ffa381387 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 15 cali3aba865132c [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 16 cali1ce0f5cb91c [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:29:39.012522 ntpd[1427]: 8 Nov 00:29:39 ntpd[1427]: Listen normally on 17 vxlan.calico [fe80::64e7:e5ff:fe80:cf45%12]:123 Nov 8 00:29:39.011965 ntpd[1427]: Listen normally on 9 cali27b813c3112 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 8 00:29:39.012048 ntpd[1427]: Listen normally on 10 cali37b8e5627c5 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 8 00:29:39.012117 ntpd[1427]: Listen normally on 11 cali4e9746f878f [fe80::ecee:eeff:feee:eeee%6]:123 Nov 8 00:29:39.012184 ntpd[1427]: Listen normally on 12 calidbc6d3df566 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 8 00:29:39.012245 ntpd[1427]: Listen normally on 13 calic7ace19a124 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 8 00:29:39.012303 ntpd[1427]: Listen normally on 14 cali33ffa381387 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 8 00:29:39.012359 ntpd[1427]: Listen normally on 15 cali3aba865132c [fe80::ecee:eeff:feee:eeee%10]:123 Nov 8 00:29:39.012413 ntpd[1427]: Listen normally on 16 cali1ce0f5cb91c [fe80::ecee:eeff:feee:eeee%11]:123 Nov 8 00:29:39.012468 ntpd[1427]: Listen normally on 17 vxlan.calico [fe80::64e7:e5ff:fe80:cf45%12]:123 Nov 8 00:29:41.407789 containerd[1463]: time="2025-11-08T00:29:41.407684428Z" level=info msg="StopPodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\"" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.453 [WARNING][4928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0", GenerateName:"calico-kube-controllers-66784f75f9-", Namespace:"calico-system", SelfLink:"", UID:"e59ce4b6-f87c-444d-abb1-31c4a685274a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66784f75f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b", Pod:"calico-kube-controllers-66784f75f9-twnqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37b8e5627c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.453 [INFO][4928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.453 [INFO][4928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" iface="eth0" netns="" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.453 [INFO][4928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.453 [INFO][4928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.482 [INFO][4935] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.482 [INFO][4935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.482 [INFO][4935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.492 [WARNING][4935] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.492 [INFO][4935] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.499 [INFO][4935] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.506761 containerd[1463]: 2025-11-08 00:29:41.502 [INFO][4928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.508155 containerd[1463]: time="2025-11-08T00:29:41.506914881Z" level=info msg="TearDown network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" successfully" Nov 8 00:29:41.508155 containerd[1463]: time="2025-11-08T00:29:41.506956957Z" level=info msg="StopPodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" returns successfully" Nov 8 00:29:41.508155 containerd[1463]: time="2025-11-08T00:29:41.508124045Z" level=info msg="RemovePodSandbox for \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\"" Nov 8 00:29:41.508673 containerd[1463]: time="2025-11-08T00:29:41.508193677Z" level=info msg="Forcibly stopping sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\"" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.566 [WARNING][4949] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0", GenerateName:"calico-kube-controllers-66784f75f9-", Namespace:"calico-system", SelfLink:"", UID:"e59ce4b6-f87c-444d-abb1-31c4a685274a", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66784f75f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"b33cb0cd471cb1f4889dc2df72719d1ac507c76fefb9340321cd8e58fd4d6f4b", Pod:"calico-kube-controllers-66784f75f9-twnqc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37b8e5627c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.566 [INFO][4949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.567 [INFO][4949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" iface="eth0" netns="" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.567 [INFO][4949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.567 [INFO][4949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.593 [INFO][4957] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.594 [INFO][4957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.594 [INFO][4957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.603 [WARNING][4957] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.603 [INFO][4957] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" HandleID="k8s-pod-network.078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--kube--controllers--66784f75f9--twnqc-eth0" Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.605 [INFO][4957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.608872 containerd[1463]: 2025-11-08 00:29:41.607 [INFO][4949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033" Nov 8 00:29:41.609859 containerd[1463]: time="2025-11-08T00:29:41.608924338Z" level=info msg="TearDown network for sandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" successfully" Nov 8 00:29:41.614989 containerd[1463]: time="2025-11-08T00:29:41.614931673Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:41.615152 containerd[1463]: time="2025-11-08T00:29:41.615019958Z" level=info msg="RemovePodSandbox \"078bc0a9322be88e4cee43ab45f4ecd707ebd0f226b574729ac2b908919cb033\" returns successfully" Nov 8 00:29:41.616262 containerd[1463]: time="2025-11-08T00:29:41.615862206Z" level=info msg="StopPodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\"" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.661 [WARNING][4972] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.661 [INFO][4972] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.661 [INFO][4972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" iface="eth0" netns="" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.661 [INFO][4972] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.661 [INFO][4972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.692 [INFO][4979] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.693 [INFO][4979] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.693 [INFO][4979] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.701 [WARNING][4979] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.701 [INFO][4979] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.703 [INFO][4979] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.706439 containerd[1463]: 2025-11-08 00:29:41.704 [INFO][4972] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.706439 containerd[1463]: time="2025-11-08T00:29:41.706052062Z" level=info msg="TearDown network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" successfully" Nov 8 00:29:41.706439 containerd[1463]: time="2025-11-08T00:29:41.706114637Z" level=info msg="StopPodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" returns successfully" Nov 8 00:29:41.707266 containerd[1463]: time="2025-11-08T00:29:41.706973515Z" level=info msg="RemovePodSandbox for \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\"" Nov 8 00:29:41.707266 containerd[1463]: time="2025-11-08T00:29:41.707137722Z" level=info msg="Forcibly stopping sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\"" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.754 [WARNING][4993] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" WorkloadEndpoint="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.754 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.754 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" iface="eth0" netns="" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.755 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.755 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.781 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.782 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.782 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.801 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.801 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" HandleID="k8s-pod-network.1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-whisker--66954587fc--lk9h4-eth0" Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.804 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.810195 containerd[1463]: 2025-11-08 00:29:41.807 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842" Nov 8 00:29:41.812931 containerd[1463]: time="2025-11-08T00:29:41.810377281Z" level=info msg="TearDown network for sandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" successfully" Nov 8 00:29:41.817631 containerd[1463]: time="2025-11-08T00:29:41.817400807Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:41.817631 containerd[1463]: time="2025-11-08T00:29:41.817480724Z" level=info msg="RemovePodSandbox \"1f8b15ef2bf419f4e11f0cf9563828bdf0b46e8ef36475d9bbb1c8c10f2c9842\" returns successfully" Nov 8 00:29:41.818195 containerd[1463]: time="2025-11-08T00:29:41.818137596Z" level=info msg="StopPodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\"" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.866 [WARNING][5014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4df0bd6-9275-4ffb-bc86-7dbb94791082", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63", Pod:"calico-apiserver-969f74cdf-w6w2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33ffa381387", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.867 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.867 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" iface="eth0" netns="" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.867 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.867 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.893 [INFO][5021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.894 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.894 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.904 [WARNING][5021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.904 [INFO][5021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.907 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.910032 containerd[1463]: 2025-11-08 00:29:41.908 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.911301 containerd[1463]: time="2025-11-08T00:29:41.910209341Z" level=info msg="TearDown network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" successfully" Nov 8 00:29:41.911301 containerd[1463]: time="2025-11-08T00:29:41.910243541Z" level=info msg="StopPodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" returns successfully" Nov 8 00:29:41.911610 containerd[1463]: time="2025-11-08T00:29:41.911574044Z" level=info msg="RemovePodSandbox for \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\"" Nov 8 00:29:41.911763 containerd[1463]: time="2025-11-08T00:29:41.911615103Z" level=info msg="Forcibly stopping sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\"" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.953 [WARNING][5035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4df0bd6-9275-4ffb-bc86-7dbb94791082", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"429ee88324d966808a318f8b0fecca2c6f43cbb48f8e6ee31ede2f3e2e01cb63", Pod:"calico-apiserver-969f74cdf-w6w2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali33ffa381387", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.954 [INFO][5035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.954 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" iface="eth0" netns="" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.954 [INFO][5035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.954 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.981 [INFO][5042] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.981 [INFO][5042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.981 [INFO][5042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.989 [WARNING][5042] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.989 [INFO][5042] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" HandleID="k8s-pod-network.aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--w6w2r-eth0" Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.991 [INFO][5042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.996295 containerd[1463]: 2025-11-08 00:29:41.992 [INFO][5035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b" Nov 8 00:29:41.996295 containerd[1463]: time="2025-11-08T00:29:41.994710945Z" level=info msg="TearDown network for sandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" successfully" Nov 8 00:29:42.000389 containerd[1463]: time="2025-11-08T00:29:42.000341119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:42.000578 containerd[1463]: time="2025-11-08T00:29:42.000424381Z" level=info msg="RemovePodSandbox \"aa1006a2db2f0fc31ed75327d318196c310ed277b894a7931d8923459e506b9b\" returns successfully" Nov 8 00:29:42.001398 containerd[1463]: time="2025-11-08T00:29:42.001353726Z" level=info msg="StopPodSandbox for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\"" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.050 [WARNING][5056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f25962ab-cc66-4ae1-b3d0-2209da78cffc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d", Pod:"coredns-668d6bf9bc-vbsgj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9746f878f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.050 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.050 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" iface="eth0" netns="" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.050 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.050 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.078 [INFO][5064] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.078 [INFO][5064] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.078 [INFO][5064] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.087 [WARNING][5064] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.088 [INFO][5064] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.089 [INFO][5064] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.092618 containerd[1463]: 2025-11-08 00:29:42.091 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.093427 containerd[1463]: time="2025-11-08T00:29:42.092658344Z" level=info msg="TearDown network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" successfully" Nov 8 00:29:42.093427 containerd[1463]: time="2025-11-08T00:29:42.092697015Z" level=info msg="StopPodSandbox for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" returns successfully" Nov 8 00:29:42.093678 containerd[1463]: time="2025-11-08T00:29:42.093611619Z" level=info msg="RemovePodSandbox for \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\"" Nov 8 00:29:42.093678 containerd[1463]: time="2025-11-08T00:29:42.093653568Z" level=info msg="Forcibly stopping sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\"" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.141 [WARNING][5078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f25962ab-cc66-4ae1-b3d0-2209da78cffc", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"8c99d9f94428d41d2157bbee1181e26c95a723e13ed327360d6fbbf9e1253d8d", Pod:"coredns-668d6bf9bc-vbsgj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e9746f878f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.142 [INFO][5078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.142 [INFO][5078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" iface="eth0" netns="" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.142 [INFO][5078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.142 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.170 [INFO][5086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.170 [INFO][5086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.170 [INFO][5086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.179 [WARNING][5086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.179 [INFO][5086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" HandleID="k8s-pod-network.bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--vbsgj-eth0" Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.181 [INFO][5086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.185414 containerd[1463]: 2025-11-08 00:29:42.183 [INFO][5078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af" Nov 8 00:29:42.186696 containerd[1463]: time="2025-11-08T00:29:42.185471274Z" level=info msg="TearDown network for sandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" successfully" Nov 8 00:29:42.190313 containerd[1463]: time="2025-11-08T00:29:42.190181605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:42.190483 containerd[1463]: time="2025-11-08T00:29:42.190368017Z" level=info msg="RemovePodSandbox \"bf0b6bf919366855eed91eaf67ca366ca485dcb51d3d68777efa1f82c2d6f0af\" returns successfully" Nov 8 00:29:42.191138 containerd[1463]: time="2025-11-08T00:29:42.191099659Z" level=info msg="StopPodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\"" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.238 [WARNING][5100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e54b7a9-1c64-4152-ae7f-d4eec2188483", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4", Pod:"csi-node-driver-kn6nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ce0f5cb91c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.239 [INFO][5100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.239 [INFO][5100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" iface="eth0" netns="" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.239 [INFO][5100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.239 [INFO][5100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.266 [INFO][5108] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.266 [INFO][5108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.267 [INFO][5108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.275 [WARNING][5108] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.275 [INFO][5108] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.278 [INFO][5108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.281369 containerd[1463]: 2025-11-08 00:29:42.279 [INFO][5100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.282329 containerd[1463]: time="2025-11-08T00:29:42.281456723Z" level=info msg="TearDown network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" successfully" Nov 8 00:29:42.282329 containerd[1463]: time="2025-11-08T00:29:42.281494210Z" level=info msg="StopPodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" returns successfully" Nov 8 00:29:42.284203 containerd[1463]: time="2025-11-08T00:29:42.284130816Z" level=info msg="RemovePodSandbox for \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\"" Nov 8 00:29:42.284203 containerd[1463]: time="2025-11-08T00:29:42.284175557Z" level=info msg="Forcibly stopping sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\"" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.332 [WARNING][5122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e54b7a9-1c64-4152-ae7f-d4eec2188483", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"560f399f586e8a99f7672dc87827f0476094c55c0e52c6707b499429daf631b4", Pod:"csi-node-driver-kn6nq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1ce0f5cb91c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.333 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.333 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" iface="eth0" netns="" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.333 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.333 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.361 [INFO][5129] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.361 [INFO][5129] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.361 [INFO][5129] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.370 [WARNING][5129] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.370 [INFO][5129] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" HandleID="k8s-pod-network.e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-csi--node--driver--kn6nq-eth0" Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.372 [INFO][5129] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.378241 containerd[1463]: 2025-11-08 00:29:42.375 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d" Nov 8 00:29:42.379475 containerd[1463]: time="2025-11-08T00:29:42.378282385Z" level=info msg="TearDown network for sandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" successfully" Nov 8 00:29:42.382984 containerd[1463]: time="2025-11-08T00:29:42.382933054Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:42.383133 containerd[1463]: time="2025-11-08T00:29:42.383012271Z" level=info msg="RemovePodSandbox \"e1954c932f0b824a8bfc3193c40bad101818bbb6abbe1c7c317ac8ff8d3df87d\" returns successfully" Nov 8 00:29:42.383651 containerd[1463]: time="2025-11-08T00:29:42.383615884Z" level=info msg="StopPodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\"" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.428 [WARNING][5143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"418020d8-f003-4c3d-bcf6-3368810f5d40", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd", Pod:"goldmane-666569f655-b28gg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7ace19a124", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.428 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.428 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" iface="eth0" netns="" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.428 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.428 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.457 [INFO][5150] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.457 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.457 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.466 [WARNING][5150] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.466 [INFO][5150] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.468 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.471703 containerd[1463]: 2025-11-08 00:29:42.470 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.474210 containerd[1463]: time="2025-11-08T00:29:42.471874991Z" level=info msg="TearDown network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" successfully" Nov 8 00:29:42.474210 containerd[1463]: time="2025-11-08T00:29:42.471932942Z" level=info msg="StopPodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" returns successfully" Nov 8 00:29:42.474210 containerd[1463]: time="2025-11-08T00:29:42.472897199Z" level=info msg="RemovePodSandbox for \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\"" Nov 8 00:29:42.474210 containerd[1463]: time="2025-11-08T00:29:42.472933653Z" level=info msg="Forcibly stopping sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\"" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.525 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"418020d8-f003-4c3d-bcf6-3368810f5d40", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"38b96ffe5f50ab5dfefa8b9adf8490e4a6ea1893813ed1686c17616a58c82bfd", Pod:"goldmane-666569f655-b28gg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic7ace19a124", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.525 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.525 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" iface="eth0" netns="" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.525 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.525 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.560 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.560 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.560 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.570 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.571 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" HandleID="k8s-pod-network.097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-goldmane--666569f655--b28gg-eth0" Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.572 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.577844 containerd[1463]: 2025-11-08 00:29:42.574 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f" Nov 8 00:29:42.577844 containerd[1463]: time="2025-11-08T00:29:42.576313538Z" level=info msg="TearDown network for sandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" successfully" Nov 8 00:29:42.583119 containerd[1463]: time="2025-11-08T00:29:42.582791667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:42.583119 containerd[1463]: time="2025-11-08T00:29:42.582881614Z" level=info msg="RemovePodSandbox \"097b09c58982d8ff4062747aded1a9f8124a4cefee78e9960204d0b7f0516e1f\" returns successfully" Nov 8 00:29:42.583693 containerd[1463]: time="2025-11-08T00:29:42.583636769Z" level=info msg="StopPodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\"" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.631 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f87c08c-b49d-4506-b03a-99a6bbfdb418", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8", Pod:"coredns-668d6bf9bc-zfhqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbc6d3df566", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.631 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.631 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" iface="eth0" netns="" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.631 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.631 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.658 [INFO][5192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.658 [INFO][5192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.658 [INFO][5192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.670 [WARNING][5192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.670 [INFO][5192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.672 [INFO][5192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.676073 containerd[1463]: 2025-11-08 00:29:42.674 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.676907 containerd[1463]: time="2025-11-08T00:29:42.676122121Z" level=info msg="TearDown network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" successfully" Nov 8 00:29:42.676907 containerd[1463]: time="2025-11-08T00:29:42.676160491Z" level=info msg="StopPodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" returns successfully" Nov 8 00:29:42.677017 containerd[1463]: time="2025-11-08T00:29:42.676927520Z" level=info msg="RemovePodSandbox for \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\"" Nov 8 00:29:42.677017 containerd[1463]: time="2025-11-08T00:29:42.676966410Z" level=info msg="Forcibly stopping sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\"" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.723 [WARNING][5206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3f87c08c-b49d-4506-b03a-99a6bbfdb418", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"7c08764f5a2cd350393be2a341df89a3c1a590c671f6c1344c836c0f878e84e8", Pod:"coredns-668d6bf9bc-zfhqj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbc6d3df566", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.724 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.724 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" iface="eth0" netns="" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.724 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.724 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.750 [INFO][5213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.751 [INFO][5213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.751 [INFO][5213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.760 [WARNING][5213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.760 [INFO][5213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" HandleID="k8s-pod-network.ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-coredns--668d6bf9bc--zfhqj-eth0" Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.764 [INFO][5213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.770233 containerd[1463]: 2025-11-08 00:29:42.767 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1" Nov 8 00:29:42.771193 containerd[1463]: time="2025-11-08T00:29:42.770329937Z" level=info msg="TearDown network for sandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" successfully" Nov 8 00:29:42.776430 containerd[1463]: time="2025-11-08T00:29:42.775604521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:42.776430 containerd[1463]: time="2025-11-08T00:29:42.775679487Z" level=info msg="RemovePodSandbox \"ac885e757fe8994fbc088f0e507f65700b5c5cf818ceec703c735a056be3b1a1\" returns successfully" Nov 8 00:29:42.776616 containerd[1463]: time="2025-11-08T00:29:42.776565918Z" level=info msg="StopPodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\"" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.829 [WARNING][5227] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"06ee44fb-81f0-4173-813e-506c57500250", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513", Pod:"calico-apiserver-969f74cdf-2lhrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3aba865132c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.830 [INFO][5227] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.830 [INFO][5227] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" iface="eth0" netns="" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.830 [INFO][5227] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.830 [INFO][5227] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.859 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.859 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.860 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.868 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.868 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.870 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.873220 containerd[1463]: 2025-11-08 00:29:42.871 [INFO][5227] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.873220 containerd[1463]: time="2025-11-08T00:29:42.873179974Z" level=info msg="TearDown network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" successfully" Nov 8 00:29:42.873220 containerd[1463]: time="2025-11-08T00:29:42.873213180Z" level=info msg="StopPodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" returns successfully" Nov 8 00:29:42.876009 containerd[1463]: time="2025-11-08T00:29:42.875968711Z" level=info msg="RemovePodSandbox for \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\"" Nov 8 00:29:42.876107 containerd[1463]: time="2025-11-08T00:29:42.876024929Z" level=info msg="Forcibly stopping sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\"" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.932 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0", GenerateName:"calico-apiserver-969f74cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"06ee44fb-81f0-4173-813e-506c57500250", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"969f74cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-nightly-20251107-2100-0d3a689ec1c8c9b3e562", ContainerID:"cb8cc69e36bb17f17747a35015fe9d1e562d4505e1013ddc87f57de7813e1513", Pod:"calico-apiserver-969f74cdf-2lhrt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3aba865132c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.933 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.933 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" iface="eth0" netns="" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.933 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.933 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.961 [INFO][5256] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.962 [INFO][5256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.962 [INFO][5256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.970 [WARNING][5256] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.970 [INFO][5256] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" HandleID="k8s-pod-network.8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Workload="ci--4081--3--6--nightly--20251107--2100--0d3a689ec1c8c9b3e562-k8s-calico--apiserver--969f74cdf--2lhrt-eth0" Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.972 [INFO][5256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.975420 containerd[1463]: 2025-11-08 00:29:42.973 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454" Nov 8 00:29:42.976388 containerd[1463]: time="2025-11-08T00:29:42.975484283Z" level=info msg="TearDown network for sandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" successfully" Nov 8 00:29:42.980855 containerd[1463]: time="2025-11-08T00:29:42.980784482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:29:42.980998 containerd[1463]: time="2025-11-08T00:29:42.980869604Z" level=info msg="RemovePodSandbox \"8bc728a2f3210e4261c140c0db0cd047ed4aa66ffbf8bfd83d494f41e93bc454\" returns successfully" Nov 8 00:29:43.402842 containerd[1463]: time="2025-11-08T00:29:43.402381802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:29:43.568273 containerd[1463]: time="2025-11-08T00:29:43.568211547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:43.569649 containerd[1463]: time="2025-11-08T00:29:43.569588376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:29:43.569895 containerd[1463]: time="2025-11-08T00:29:43.569624160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:43.569958 kubelet[2553]: E1108 00:29:43.569883 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:43.570382 kubelet[2553]: E1108 00:29:43.569951 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:43.570382 kubelet[2553]: E1108 00:29:43.570171 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mb96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66784f75f9-twnqc_calico-system(e59ce4b6-f87c-444d-abb1-31c4a685274a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:43.571875 kubelet[2553]: E1108 00:29:43.571819 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:29:44.402691 containerd[1463]: time="2025-11-08T00:29:44.402448321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:29:44.570971 containerd[1463]: time="2025-11-08T00:29:44.570892545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:44.572407 containerd[1463]: time="2025-11-08T00:29:44.572347909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:29:44.572527 containerd[1463]: time="2025-11-08T00:29:44.572462756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:44.572850 kubelet[2553]: E1108 00:29:44.572784 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:44.573370 kubelet[2553]: E1108 00:29:44.572851 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:44.573370 kubelet[2553]: E1108 00:29:44.573067 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b28gg_calico-system(418020d8-f003-4c3d-bcf6-3368810f5d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:44.574945 kubelet[2553]: E1108 00:29:44.574326 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:29:46.893146 systemd[1]: Started sshd@7-10.128.0.61:22-139.178.89.65:58404.service - OpenSSH per-connection server daemon (139.178.89.65:58404). Nov 8 00:29:47.181058 sshd[5274]: Accepted publickey for core from 139.178.89.65 port 58404 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:29:47.183035 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:47.190114 systemd-logind[1440]: New session 8 of user core. Nov 8 00:29:47.197954 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:29:47.406758 containerd[1463]: time="2025-11-08T00:29:47.405944953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:47.494782 sshd[5274]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:47.505430 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:29:47.506633 systemd[1]: sshd@7-10.128.0.61:22-139.178.89.65:58404.service: Deactivated successfully. Nov 8 00:29:47.509705 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:29:47.511481 systemd-logind[1440]: Removed session 8. 
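[Editor's note] From 00:29:43 onward every Calico image pull fails identically: containerd reports "trying next host - response was http.StatusNotFound" for ghcr.io, and kubelet surfaces ErrImagePull, so the v3.30.4 tags appear to be simply absent from the registry. The sketch below reproduces the 404 with an anonymous manifest HEAD request; the token-endpoint query string follows the common OCI registry auth flow and is an assumption, not something taken from this log.

```go
// Sketch: check whether a tag exists on ghcr.io via the OCI distribution API
// (anonymous pull token, then HEAD on the manifest endpoint).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func manifestExists(repo, tag string) (bool, error) {
	// Assumed anonymous-token endpoint, per the usual registry auth flow.
	tokenURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokenURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	exists, err := manifestExists("flatcar/calico/kube-controllers", "v3.30.4")
	fmt.Println(exists, err) // expected: false <nil>, matching the NotFound errors in this log
}
```

Expected output is "false <nil>", matching the NotFound that containerd and kubelet keep logging above and below while pod_workers retries with backoff.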
Nov 8 00:29:47.568106 containerd[1463]: time="2025-11-08T00:29:47.568030500Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:29:47.569739 containerd[1463]: time="2025-11-08T00:29:47.569659496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:29:47.570130 containerd[1463]: time="2025-11-08T00:29:47.569689197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:29:47.570462 kubelet[2553]: E1108 00:29:47.570405 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:29:47.571178 kubelet[2553]: E1108 00:29:47.570480 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:29:47.571178 kubelet[2553]: E1108 00:29:47.570688 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpvzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-969f74cdf-w6w2r_calico-apiserver(f4df0bd6-9275-4ffb-bc86-7dbb94791082): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:29:47.572187 kubelet[2553]: E1108 00:29:47.571889 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082"
Nov 8 00:29:48.402711 containerd[1463]: time="2025-11-08T00:29:48.402375811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:29:48.557342 containerd[1463]: time="2025-11-08T00:29:48.557256075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:29:48.559090 containerd[1463]: time="2025-11-08T00:29:48.558975702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:29:48.559090 containerd[1463]: time="2025-11-08T00:29:48.558972158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:29:48.559365 kubelet[2553]: E1108 00:29:48.559306 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:29:48.559498 kubelet[2553]: E1108 00:29:48.559374 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:29:48.559637 kubelet[2553]: E1108 00:29:48.559546 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xcp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-969f74cdf-2lhrt_calico-apiserver(06ee44fb-81f0-4173-813e-506c57500250): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:29:48.561305 kubelet[2553]: E1108 00:29:48.561243 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250"
Nov 8 00:29:50.402886 containerd[1463]: time="2025-11-08T00:29:50.402612813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:29:50.563989 containerd[1463]: time="2025-11-08T00:29:50.563921320Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:29:50.565790 containerd[1463]: time="2025-11-08T00:29:50.565704457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:29:50.565991 containerd[1463]: time="2025-11-08T00:29:50.565751350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:29:50.566092 kubelet[2553]: E1108 00:29:50.566012 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:29:50.567220 kubelet[2553]: E1108 00:29:50.566087 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:29:50.567220 kubelet[2553]: E1108 00:29:50.566279 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6hps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:29:50.569278 containerd[1463]: time="2025-11-08T00:29:50.568745474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:29:50.737073 containerd[1463]: time="2025-11-08T00:29:50.736900410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:29:50.967466 containerd[1463]: time="2025-11-08T00:29:50.967035685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:29:50.967466 containerd[1463]: time="2025-11-08T00:29:50.967111337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:29:50.967706 kubelet[2553]: E1108 00:29:50.967558 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:29:50.967706 kubelet[2553]: E1108 00:29:50.967624 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:29:50.967871 kubelet[2553]: E1108 00:29:50.967813 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6hps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:29:50.969587 kubelet[2553]: E1108 00:29:50.969505 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483"
Nov 8 00:29:51.410300 kubelet[2553]: E1108 00:29:51.410148 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252"
Nov 8 00:29:52.554176 systemd[1]: Started sshd@8-10.128.0.61:22-139.178.89.65:58410.service - OpenSSH per-connection server daemon (139.178.89.65:58410).
Nov 8 00:29:52.845916 sshd[5291]: Accepted publickey for core from 139.178.89.65 port 58410 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:29:52.848310 sshd[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:52.854994 systemd-logind[1440]: New session 9 of user core.
Nov 8 00:29:52.862997 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 8 00:29:53.140839 sshd[5291]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:53.145547 systemd[1]: sshd@8-10.128.0.61:22-139.178.89.65:58410.service: Deactivated successfully.
Nov 8 00:29:53.149792 systemd[1]: session-9.scope: Deactivated successfully.
Nov 8 00:29:53.152134 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit.
Nov 8 00:29:53.154496 systemd-logind[1440]: Removed session 9.
Nov 8 00:29:56.403057 kubelet[2553]: E1108 00:29:56.402964 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a"
Nov 8 00:29:57.402920 kubelet[2553]: E1108 00:29:57.402785 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40"
Nov 8 00:29:58.199193 systemd[1]: Started sshd@9-10.128.0.61:22-139.178.89.65:37454.service - OpenSSH per-connection server daemon (139.178.89.65:37454).
Nov 8 00:29:58.402543 kubelet[2553]: E1108 00:29:58.402479 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082"
Nov 8 00:29:58.489954 sshd[5335]: Accepted publickey for core from 139.178.89.65 port 37454 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:29:58.492208 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:58.505136 systemd-logind[1440]: New session 10 of user core.
Nov 8 00:29:58.511986 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 8 00:29:58.790634 sshd[5335]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:58.798215 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit.
Nov 8 00:29:58.799554 systemd[1]: sshd@9-10.128.0.61:22-139.178.89.65:37454.service: Deactivated successfully.
Nov 8 00:29:58.803021 systemd[1]: session-10.scope: Deactivated successfully.
Nov 8 00:29:58.804342 systemd-logind[1440]: Removed session 10.
Nov 8 00:29:58.849145 systemd[1]: Started sshd@10-10.128.0.61:22-139.178.89.65:37466.service - OpenSSH per-connection server daemon (139.178.89.65:37466).
Nov 8 00:29:59.136370 sshd[5350]: Accepted publickey for core from 139.178.89.65 port 37466 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:29:59.138400 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:59.145254 systemd-logind[1440]: New session 11 of user core.
Nov 8 00:29:59.153957 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 8 00:29:59.479132 sshd[5350]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:59.485700 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit.
Nov 8 00:29:59.487090 systemd[1]: sshd@10-10.128.0.61:22-139.178.89.65:37466.service: Deactivated successfully.
Nov 8 00:29:59.493859 systemd[1]: session-11.scope: Deactivated successfully.
Nov 8 00:29:59.495448 systemd-logind[1440]: Removed session 11.
Nov 8 00:29:59.535138 systemd[1]: Started sshd@11-10.128.0.61:22-139.178.89.65:37476.service - OpenSSH per-connection server daemon (139.178.89.65:37476).
Nov 8 00:29:59.826178 sshd[5361]: Accepted publickey for core from 139.178.89.65 port 37476 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:29:59.827134 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:59.836456 systemd-logind[1440]: New session 12 of user core.
Nov 8 00:29:59.841741 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:30:00.120277 sshd[5361]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:00.127018 systemd[1]: sshd@11-10.128.0.61:22-139.178.89.65:37476.service: Deactivated successfully.
Nov 8 00:30:00.130508 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:30:00.132258 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:30:00.134381 systemd-logind[1440]: Removed session 12.
Nov 8 00:30:01.405854 kubelet[2553]: E1108 00:30:01.405659 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250"
Nov 8 00:30:04.403956 containerd[1463]: time="2025-11-08T00:30:04.403021187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:30:04.406196 kubelet[2553]: E1108 00:30:04.405367 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483"
Nov 8 00:30:04.616645 containerd[1463]: time="2025-11-08T00:30:04.616573661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:04.618265 containerd[1463]: time="2025-11-08T00:30:04.618201648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:30:04.618480 containerd[1463]: time="2025-11-08T00:30:04.618244964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:30:04.618860 kubelet[2553]: E1108 00:30:04.618784 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:30:04.618995 kubelet[2553]: E1108 00:30:04.618866 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:30:04.619239 kubelet[2553]: E1108 00:30:04.619154 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:f246e03119bb4746838713acdb1a11df,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbvcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c54fb4dfc-znnpn_calico-system(5df9e569-a84b-4372-9851-dc5eac1e2252): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:04.623478 containerd[1463]: time="2025-11-08T00:30:04.623421312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:30:04.789560 containerd[1463]: time="2025-11-08T00:30:04.789404712Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:04.791101 containerd[1463]: time="2025-11-08T00:30:04.790946945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:30:04.791101 containerd[1463]: time="2025-11-08T00:30:04.791056360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:30:04.792138 kubelet[2553]: E1108 00:30:04.792080 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:30:04.792289 kubelet[2553]: E1108 00:30:04.792150 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:30:04.792377 kubelet[2553]: E1108 00:30:04.792314 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbvcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-c54fb4dfc-znnpn_calico-system(5df9e569-a84b-4372-9851-dc5eac1e2252): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:04.793887 kubelet[2553]: E1108 00:30:04.793822 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252"
Nov 8 00:30:05.181184 systemd[1]: Started sshd@12-10.128.0.61:22-139.178.89.65:37488.service - OpenSSH per-connection server daemon (139.178.89.65:37488).
Nov 8 00:30:05.482638 sshd[5376]: Accepted publickey for core from 139.178.89.65 port 37488 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:30:05.484480 sshd[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:05.494786 systemd-logind[1440]: New session 13 of user core.
Nov 8 00:30:05.498317 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:30:05.792031 sshd[5376]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:05.798552 systemd[1]: sshd@12-10.128.0.61:22-139.178.89.65:37488.service: Deactivated successfully.
Nov 8 00:30:05.804857 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:30:05.807629 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:30:05.810334 systemd-logind[1440]: Removed session 13.
Nov 8 00:30:07.409114 containerd[1463]: time="2025-11-08T00:30:07.408635887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:30:07.574707 containerd[1463]: time="2025-11-08T00:30:07.574437627Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:07.576369 containerd[1463]: time="2025-11-08T00:30:07.576219468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:30:07.576369 containerd[1463]: time="2025-11-08T00:30:07.576318129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:30:07.577224 kubelet[2553]: E1108 00:30:07.576893 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:30:07.577224 kubelet[2553]: E1108 00:30:07.577083 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:30:07.579021 kubelet[2553]: E1108 00:30:07.578862 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mb96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66784f75f9-twnqc_calico-system(e59ce4b6-f87c-444d-abb1-31c4a685274a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:07.580180 kubelet[2553]: E1108 00:30:07.580109 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a"
Nov 8 00:30:08.403276 containerd[1463]: time="2025-11-08T00:30:08.403189382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:30:08.560944 containerd[1463]: time="2025-11-08T00:30:08.560886118Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:08.562588 containerd[1463]: time="2025-11-08T00:30:08.562515616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:30:08.562784 containerd[1463]: time="2025-11-08T00:30:08.562545140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:30:08.563004 kubelet[2553]: E1108 00:30:08.562934 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:30:08.563100 kubelet[2553]: E1108 00:30:08.563004 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:30:08.563308 kubelet[2553]: E1108 00:30:08.563241 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-b28gg_calico-system(418020d8-f003-4c3d-bcf6-3368810f5d40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:08.564973 kubelet[2553]: E1108 00:30:08.564915 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40"
Nov 8 00:30:10.844210 systemd[1]: Started sshd@13-10.128.0.61:22-139.178.89.65:43300.service - OpenSSH per-connection server daemon (139.178.89.65:43300).
Nov 8 00:30:11.136829 sshd[5393]: Accepted publickey for core from 139.178.89.65 port 43300 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:30:11.138529 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:11.145807 systemd-logind[1440]: New session 14 of user core.
Nov 8 00:30:11.155007 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:30:11.431550 sshd[5393]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:11.437911 systemd[1]: sshd@13-10.128.0.61:22-139.178.89.65:43300.service: Deactivated successfully.
Nov 8 00:30:11.441217 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:30:11.442362 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:30:11.444428 systemd-logind[1440]: Removed session 14.
Nov 8 00:30:12.403975 containerd[1463]: time="2025-11-08T00:30:12.403925005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:30:12.578070 containerd[1463]: time="2025-11-08T00:30:12.578003641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:12.579762 containerd[1463]: time="2025-11-08T00:30:12.579582153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:30:12.579762 containerd[1463]: time="2025-11-08T00:30:12.579628816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:30:12.580292 kubelet[2553]: E1108 00:30:12.580226 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:12.581048 kubelet[2553]: E1108 00:30:12.580301 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:12.581048 kubelet[2553]: E1108 00:30:12.580538 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpvzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-969f74cdf-w6w2r_calico-apiserver(f4df0bd6-9275-4ffb-bc86-7dbb94791082): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:12.582296 kubelet[2553]: E1108 00:30:12.582234 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082"
Nov 8 00:30:14.403987 containerd[1463]: time="2025-11-08T00:30:14.403896499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:30:14.708360 containerd[1463]: time="2025-11-08T00:30:14.707983015Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:14.710304 containerd[1463]: time="2025-11-08T00:30:14.710191498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:30:14.710573 containerd[1463]: time="2025-11-08T00:30:14.710257884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:30:14.710644 kubelet[2553]: E1108 00:30:14.710536 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:14.710644 kubelet[2553]: E1108 00:30:14.710605 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:14.711262 kubelet[2553]: E1108 00:30:14.710804 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xcp7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-969f74cdf-2lhrt_calico-apiserver(06ee44fb-81f0-4173-813e-506c57500250): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:14.712557 kubelet[2553]: E1108 00:30:14.712408 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250"
Nov 8 00:30:16.403281 kubelet[2553]: E1108 00:30:16.403190 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252"
Nov 8 00:30:16.488204 systemd[1]: Started sshd@14-10.128.0.61:22-139.178.89.65:59938.service - OpenSSH per-connection server daemon (139.178.89.65:59938).
Nov 8 00:30:16.774874 sshd[5414]: Accepted publickey for core from 139.178.89.65 port 59938 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ
Nov 8 00:30:16.775968 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:16.782807 systemd-logind[1440]: New session 15 of user core.
Nov 8 00:30:16.788949 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:30:17.071006 sshd[5414]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:17.077535 systemd[1]: sshd@14-10.128.0.61:22-139.178.89.65:59938.service: Deactivated successfully.
Nov 8 00:30:17.082462 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:30:17.083809 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:30:17.085810 systemd-logind[1440]: Removed session 15.
Nov 8 00:30:17.404038 containerd[1463]: time="2025-11-08T00:30:17.403362871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:30:17.592825 containerd[1463]: time="2025-11-08T00:30:17.592743461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:17.594354 containerd[1463]: time="2025-11-08T00:30:17.594302043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:30:17.594513 containerd[1463]: time="2025-11-08T00:30:17.594407216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:30:17.594825 kubelet[2553]: E1108 00:30:17.594753 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:30:17.594825 kubelet[2553]: E1108 00:30:17.594822 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:30:17.595906 kubelet[2553]: E1108 00:30:17.595018 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6hps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:17.598230 containerd[1463]: time="2025-11-08T00:30:17.597906843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:30:17.758045 containerd[1463]: time="2025-11-08T00:30:17.757894523Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:17.759942 containerd[1463]: time="2025-11-08T00:30:17.759875564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:30:17.760133 containerd[1463]: time="2025-11-08T00:30:17.760007726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:30:17.760304 kubelet[2553]: E1108 00:30:17.760247 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:30:17.760466 kubelet[2553]: E1108 00:30:17.760323 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:30:17.761002 kubelet[2553]: E1108 00:30:17.760526 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6hps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kn6nq_calico-system(5e54b7a9-1c64-4152-ae7f-d4eec2188483): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:17.761840 kubelet[2553]: E1108 00:30:17.761782 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not
found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:30:20.401909 kubelet[2553]: E1108 00:30:20.401525 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:30:21.405674 kubelet[2553]: E1108 00:30:21.405141 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:30:22.131197 systemd[1]: Started sshd@15-10.128.0.61:22-139.178.89.65:59954.service - OpenSSH per-connection server daemon (139.178.89.65:59954). Nov 8 00:30:22.422558 sshd[5429]: Accepted publickey for core from 139.178.89.65 port 59954 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:22.424461 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:22.430818 systemd-logind[1440]: New session 16 of user core. Nov 8 00:30:22.439045 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:30:22.714969 sshd[5429]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:22.719954 systemd[1]: sshd@15-10.128.0.61:22-139.178.89.65:59954.service: Deactivated successfully. Nov 8 00:30:22.722647 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:30:22.725221 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:30:22.727566 systemd-logind[1440]: Removed session 16. Nov 8 00:30:22.778917 systemd[1]: Started sshd@16-10.128.0.61:22-139.178.89.65:59970.service - OpenSSH per-connection server daemon (139.178.89.65:59970). Nov 8 00:30:23.072420 sshd[5442]: Accepted publickey for core from 139.178.89.65 port 59970 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:23.074668 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:23.081389 systemd-logind[1440]: New session 17 of user core. Nov 8 00:30:23.087944 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:30:23.450591 sshd[5442]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:23.456602 systemd[1]: sshd@16-10.128.0.61:22-139.178.89.65:59970.service: Deactivated successfully. Nov 8 00:30:23.460627 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:30:23.461981 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:30:23.463547 systemd-logind[1440]: Removed session 17. 
Nov 8 00:30:23.507161 systemd[1]: Started sshd@17-10.128.0.61:22-139.178.89.65:59978.service - OpenSSH per-connection server daemon (139.178.89.65:59978). Nov 8 00:30:23.800338 sshd[5453]: Accepted publickey for core from 139.178.89.65 port 59978 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:23.802663 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:23.810346 systemd-logind[1440]: New session 18 of user core. Nov 8 00:30:23.818053 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:30:24.683218 sshd[5453]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:24.693762 systemd[1]: sshd@17-10.128.0.61:22-139.178.89.65:59978.service: Deactivated successfully. Nov 8 00:30:24.701132 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:30:24.702385 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:30:24.704442 systemd-logind[1440]: Removed session 18. Nov 8 00:30:24.737558 systemd[1]: Started sshd@18-10.128.0.61:22-139.178.89.65:59992.service - OpenSSH per-connection server daemon (139.178.89.65:59992). Nov 8 00:30:25.033600 sshd[5471]: Accepted publickey for core from 139.178.89.65 port 59992 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:25.035954 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:25.048180 systemd-logind[1440]: New session 19 of user core. Nov 8 00:30:25.053968 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:30:25.477066 sshd[5471]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:25.482331 systemd[1]: sshd@18-10.128.0.61:22-139.178.89.65:59992.service: Deactivated successfully. Nov 8 00:30:25.486154 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:30:25.489135 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:30:25.491861 systemd-logind[1440]: Removed session 19. Nov 8 00:30:25.531405 systemd[1]: Started sshd@19-10.128.0.61:22-139.178.89.65:59998.service - OpenSSH per-connection server daemon (139.178.89.65:59998). Nov 8 00:30:25.829836 sshd[5504]: Accepted publickey for core from 139.178.89.65 port 59998 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:25.834763 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:25.850010 systemd-logind[1440]: New session 20 of user core. Nov 8 00:30:25.854925 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:30:26.123952 sshd[5504]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:26.129051 systemd[1]: sshd@19-10.128.0.61:22-139.178.89.65:59998.service: Deactivated successfully. Nov 8 00:30:26.132664 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:30:26.135124 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:30:26.136965 systemd-logind[1440]: Removed session 20. 
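A reading aid for the err= fields in the kubelet records above: each layer quotes the one below it with Go-style %q quoting, so kubelet's record of containerd's error of the registry's answer ends up with triple-escaped \\\" sequences around the image reference. One unicode_escape pass per layer unfolds them. A small sketch (safe here only because these payloads are plain ASCII; unicode_escape would mangle real UTF-8):

    import codecs

    raw = r'Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\"'
    once = codecs.decode(raw, "unicode_escape")    # \\\" -> \"
    twice = codecs.decode(once, "unicode_escape")  # \"   -> "
    print(twice)  # Back-off pulling image "ghcr.io/flatcar/calico/whisker:v3.30.4"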
Nov 8 00:30:26.402855 kubelet[2553]: E1108 00:30:26.402349 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:30:27.405368 kubelet[2553]: E1108 00:30:27.405307 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:30:30.403035 kubelet[2553]: E1108 00:30:30.402956 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252" Nov 8 00:30:31.181177 systemd[1]: Started sshd@20-10.128.0.61:22-139.178.89.65:55368.service - OpenSSH per-connection server daemon (139.178.89.65:55368). 
Nov 8 00:30:31.405285 kubelet[2553]: E1108 00:30:31.405169 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:30:31.486862 sshd[5518]: Accepted publickey for core from 139.178.89.65 port 55368 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:31.487869 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:31.499339 systemd-logind[1440]: New session 21 of user core. Nov 8 00:30:31.504983 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:30:31.776947 sshd[5518]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:31.784120 systemd[1]: sshd@20-10.128.0.61:22-139.178.89.65:55368.service: Deactivated successfully. Nov 8 00:30:31.789224 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:30:31.790853 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:30:31.794041 systemd-logind[1440]: Removed session 21. Nov 8 00:30:33.405272 kubelet[2553]: E1108 00:30:33.404191 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:30:34.403229 kubelet[2553]: E1108 00:30:34.403100 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:30:36.834150 systemd[1]: Started sshd@21-10.128.0.61:22-139.178.89.65:34104.service - OpenSSH per-connection server daemon (139.178.89.65:34104). 
Nov 8 00:30:37.126526 sshd[5532]: Accepted publickey for core from 139.178.89.65 port 34104 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:37.128508 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:37.135234 systemd-logind[1440]: New session 22 of user core. Nov 8 00:30:37.137999 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:30:37.422345 sshd[5532]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:37.427863 systemd[1]: sshd@21-10.128.0.61:22-139.178.89.65:34104.service: Deactivated successfully. Nov 8 00:30:37.431095 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:30:37.433458 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:30:37.435505 systemd-logind[1440]: Removed session 22. Nov 8 00:30:40.405216 kubelet[2553]: E1108 00:30:40.404959 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-w6w2r" podUID="f4df0bd6-9275-4ffb-bc86-7dbb94791082" Nov 8 00:30:41.404878 kubelet[2553]: E1108 00:30:41.404826 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-969f74cdf-2lhrt" podUID="06ee44fb-81f0-4173-813e-506c57500250" Nov 8 00:30:42.485788 systemd[1]: Started sshd@22-10.128.0.61:22-139.178.89.65:34112.service - OpenSSH per-connection server daemon (139.178.89.65:34112). Nov 8 00:30:42.795151 sshd[5547]: Accepted publickey for core from 139.178.89.65 port 34112 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:42.798777 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:42.806087 systemd-logind[1440]: New session 23 of user core. Nov 8 00:30:42.813057 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:30:43.207925 sshd[5547]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:43.214893 systemd[1]: sshd@22-10.128.0.61:22-139.178.89.65:34112.service: Deactivated successfully. Nov 8 00:30:43.219224 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:30:43.222660 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:30:43.225766 systemd-logind[1440]: Removed session 23. 
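Between the real pull attempts, the kubelet emits nothing for these pods but pod_workers "Error syncing pod, skipping" records: every sync short-circuits in ImagePullBackOff without touching the registry. When a journal slice like this one gets long, a throwaway tally makes the pattern obvious. A sketch with the field patterns matched from the records above (not a general journald parser):

    import re
    from collections import Counter

    IMAGE = re.compile(r'Back-off pulling image \\+"([^"\\]+)')
    POD = re.compile(r'pod="([^"]+)"')

    def tally(lines):
        """Count (pod, image) pairs stuck in ImagePullBackOff."""
        counts = Counter()
        for line in lines:
            pod = POD.search(line)
            for image in IMAGE.findall(line):
                counts[pod.group(1) if pod else "?", image] += 1
        return counts

    sample = [
        r'pod_workers.go:1301] "Error syncing pod, skipping" err="failed to '
        r'\"StartContainer\" for \"goldmane\" with ImagePullBackOff: '
        r'\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\" '
        r'..." pod="calico-system/goldmane-666569f655-b28gg"',
    ]
    for (pod, image), n in sorted(tally(sample).items()):
        print(f"{n}x  {pod}  <-  {image}")

Run over the full journal, the counts cleanly separate the stuck images (apiserver, csi, goldmane, kube-controllers, node-driver-registrar, whisker, whisker-backend) from anything transient.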
Nov 8 00:30:43.412097 kubelet[2553]: E1108 00:30:43.411785 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-c54fb4dfc-znnpn" podUID="5df9e569-a84b-4372-9851-dc5eac1e2252" Nov 8 00:30:43.412097 kubelet[2553]: E1108 00:30:43.411923 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kn6nq" podUID="5e54b7a9-1c64-4152-ae7f-d4eec2188483" Nov 8 00:30:46.404286 kubelet[2553]: E1108 00:30:46.404224 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-b28gg" podUID="418020d8-f003-4c3d-bcf6-3368810f5d40" Nov 8 00:30:48.269312 systemd[1]: Started sshd@23-10.128.0.61:22-139.178.89.65:40534.service - OpenSSH per-connection server daemon (139.178.89.65:40534). 
Nov 8 00:30:48.404339 containerd[1463]: time="2025-11-08T00:30:48.404023331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:30:48.587114 containerd[1463]: time="2025-11-08T00:30:48.586955956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:48.590646 containerd[1463]: time="2025-11-08T00:30:48.590414430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:30:48.590646 containerd[1463]: time="2025-11-08T00:30:48.590452495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:30:48.590955 sshd[5563]: Accepted publickey for core from 139.178.89.65 port 40534 ssh2: RSA SHA256:ogwUEVYP/0oM1umZiqSKcXs0yHGk8j4B+2jq1gLm8nQ Nov 8 00:30:48.591460 kubelet[2553]: E1108 00:30:48.590779 2553 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:30:48.591460 kubelet[2553]: E1108 00:30:48.590844 2553 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:30:48.591460 kubelet[2553]: E1108 00:30:48.591033 2553 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mb96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66784f75f9-twnqc_calico-system(e59ce4b6-f87c-444d-abb1-31c4a685274a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:48.593852 kubelet[2553]: E1108 00:30:48.592953 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66784f75f9-twnqc" podUID="e59ce4b6-f87c-444d-abb1-31c4a685274a" Nov 8 00:30:48.593069 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:48.604047 systemd-logind[1440]: New session 24 of user core. Nov 8 00:30:48.610588 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:30:48.969089 sshd[5563]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:48.979405 systemd[1]: sshd@23-10.128.0.61:22-139.178.89.65:40534.service: Deactivated successfully. Nov 8 00:30:48.985979 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:30:48.989125 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:30:48.995048 systemd-logind[1440]: Removed session 24.
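The spacing of these records follows kubelet's per-image pull backoff: each failed pull roughly doubles the wait before the next real attempt (the defaults, as I read recent kubelet sources, are a 10s base capped at 300s; treat both constants as assumptions), which is why a genuine PullImage for kube-controllers reappears only at 00:30:48 while the intervening syncs at 00:30:21 and 00:30:34 fail fast from the cached backoff state. A sketch of that schedule:

    def image_pull_backoff(base=10, cap=300):
        """Assumed kubelet defaults: 10s initial delay, doubling, 300s ceiling."""
        delay = base
        while True:
            yield delay
            delay = min(delay * 2, cap)

    schedule = image_pull_backoff()
    elapsed = 0
    for _ in range(7):
        delay = next(schedule)
        elapsed += delay
        print(f"retry after {delay:>3}s  (t+{elapsed}s)")

After half a dozen failures each pod settles into a steady five-minute retry loop until the missing image is published or the reference is corrected; these references all point under ghcr.io/flatcar/calico/, so the likely fix is pinning a tag that actually exists there.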