Nov 8 00:29:04.116474 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:29:04.116495 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:29:04.116503 kernel: BIOS-provided physical RAM map:
Nov 8 00:29:04.116509 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:29:04.116514 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:29:04.116519 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:29:04.116526 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 8 00:29:04.116531 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 8 00:29:04.116538 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:29:04.116544 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:29:04.116549 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:29:04.116555 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:29:04.116560 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:29:04.116566 kernel: NX (Execute Disable) protection: active
Nov 8 00:29:04.116574 kernel: APIC: Static calls initialized
Nov 8 00:29:04.116580 kernel: SMBIOS 3.0.0 present.
Nov 8 00:29:04.116587 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 8 00:29:04.116593 kernel: Hypervisor detected: KVM
Nov 8 00:29:04.116598 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:29:04.116604 kernel: kvm-clock: using sched offset of 3690760737 cycles
Nov 8 00:29:04.116611 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:29:04.116617 kernel: tsc: Detected 2495.312 MHz processor
Nov 8 00:29:04.116624 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:29:04.116632 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:29:04.116638 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 8 00:29:04.116644 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:29:04.116650 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:29:04.116657 kernel: Using GB pages for direct mapping
Nov 8 00:29:04.116663 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:29:04.116669 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Nov 8 00:29:04.116675 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116681 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116689 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116695 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 8 00:29:04.116701 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116707 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116713 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116720 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:04.116726 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Nov 8 00:29:04.116732 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Nov 8 00:29:04.116742 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 8 00:29:04.116748 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Nov 8 00:29:04.116755 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Nov 8 00:29:04.116761 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Nov 8 00:29:04.116768 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Nov 8 00:29:04.116774 kernel: No NUMA configuration found
Nov 8 00:29:04.116782 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 8 00:29:04.116788 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Nov 8 00:29:04.116795 kernel: Zone ranges:
Nov 8 00:29:04.116801 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:29:04.116808 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 8 00:29:04.116814 kernel: Normal empty
Nov 8 00:29:04.116821 kernel: Movable zone start for each node
Nov 8 00:29:04.116827 kernel: Early memory node ranges
Nov 8 00:29:04.116833 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:29:04.116840 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 8 00:29:04.116848 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 8 00:29:04.116854 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:29:04.116860 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:29:04.116867 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 8 00:29:04.116873 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:29:04.116880 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:29:04.116886 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:29:04.116893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:29:04.116899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:29:04.116907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:29:04.116913 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:29:04.116920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:29:04.116926 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:29:04.116933 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:29:04.116939 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:29:04.116946 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:29:04.116952 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:29:04.116959 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:29:04.116967 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:29:04.116974 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:29:04.116980 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:29:04.116987 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:29:04.116993 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:29:04.116999 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 8 00:29:04.117025 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:29:04.117032 kernel: random: crng init done
Nov 8 00:29:04.117040 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:29:04.117047 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:29:04.117053 kernel: Fallback order for Node 0: 0
Nov 8 00:29:04.117059 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Nov 8 00:29:04.117066 kernel: Policy zone: DMA32
Nov 8 00:29:04.117074 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:29:04.117082 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 125152K reserved, 0K cma-reserved)
Nov 8 00:29:04.117089 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:29:04.117097 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:29:04.117105 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:29:04.117111 kernel: Dynamic Preempt: voluntary
Nov 8 00:29:04.117118 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:29:04.117125 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:29:04.117131 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:29:04.117138 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:29:04.117144 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:29:04.117151 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:29:04.117157 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:29:04.117164 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:29:04.117172 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:29:04.117178 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:29:04.117195 kernel: Console: colour VGA+ 80x25
Nov 8 00:29:04.117202 kernel: printk: console [tty0] enabled
Nov 8 00:29:04.117209 kernel: printk: console [ttyS0] enabled
Nov 8 00:29:04.117215 kernel: ACPI: Core revision 20230628
Nov 8 00:29:04.117222 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:29:04.117228 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:29:04.117235 kernel: x2apic enabled
Nov 8 00:29:04.117243 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:29:04.117250 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:29:04.117256 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:29:04.117263 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Nov 8 00:29:04.117269 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:29:04.117276 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:29:04.117282 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:29:04.117289 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:29:04.117301 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:29:04.117308 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:29:04.117315 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:29:04.117323 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:29:04.117329 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:29:04.117337 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:29:04.117343 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:29:04.117351 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:29:04.117359 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:29:04.117366 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:29:04.117373 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:29:04.117380 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:29:04.117387 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:29:04.117393 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:29:04.117400 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:29:04.117407 kernel: landlock: Up and running.
Nov 8 00:29:04.117413 kernel: SELinux: Initializing.
Nov 8 00:29:04.117422 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:29:04.117429 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:29:04.117436 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:29:04.117442 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:29:04.117449 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:29:04.117456 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:29:04.117463 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:29:04.117470 kernel: ... version: 0
Nov 8 00:29:04.117477 kernel: ... bit width: 48
Nov 8 00:29:04.117485 kernel: ... generic registers: 6
Nov 8 00:29:04.117491 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:29:04.117498 kernel: ... max period: 00007fffffffffff
Nov 8 00:29:04.117505 kernel: ... fixed-purpose events: 0
Nov 8 00:29:04.117512 kernel: ... event mask: 000000000000003f
Nov 8 00:29:04.117518 kernel: signal: max sigframe size: 1776
Nov 8 00:29:04.117525 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:29:04.117532 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:29:04.117539 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:29:04.117547 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:29:04.117554 kernel: .... node #0, CPUs: #1
Nov 8 00:29:04.117560 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:29:04.117567 kernel: smpboot: Max logical packages: 1
Nov 8 00:29:04.117574 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Nov 8 00:29:04.117581 kernel: devtmpfs: initialized
Nov 8 00:29:04.117587 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:29:04.117594 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:29:04.117601 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:29:04.117609 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:29:04.117616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:29:04.117623 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:29:04.117630 kernel: audit: type=2000 audit(1762561742.709:1): state=initialized audit_enabled=0 res=1
Nov 8 00:29:04.117636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:29:04.117643 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:29:04.117650 kernel: cpuidle: using governor menu
Nov 8 00:29:04.117657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:29:04.117663 kernel: dca service started, version 1.12.1
Nov 8 00:29:04.117672 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:29:04.117678 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:29:04.117685 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:29:04.117692 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:29:04.117699 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:29:04.117706 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:29:04.117713 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:29:04.117719 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:29:04.117726 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:29:04.117734 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:29:04.117741 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:29:04.117748 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:29:04.117755 kernel: ACPI: Interpreter enabled
Nov 8 00:29:04.117761 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:29:04.117768 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:29:04.117775 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:29:04.117782 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:29:04.117789 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:29:04.117797 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:29:04.117920 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:29:04.118000 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:29:04.118090 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:29:04.118100 kernel: PCI host bridge to bus 0000:00
Nov 8 00:29:04.118175 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:29:04.118264 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:29:04.118332 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:29:04.118395 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 8 00:29:04.118474 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:29:04.118551 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 8 00:29:04.118614 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:29:04.118700 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:29:04.118787 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Nov 8 00:29:04.118859 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Nov 8 00:29:04.118929 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 8 00:29:04.119000 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Nov 8 00:29:04.119088 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Nov 8 00:29:04.119162 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:29:04.119260 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.119338 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Nov 8 00:29:04.119419 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.119491 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Nov 8 00:29:04.119570 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.119642 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Nov 8 00:29:04.120230 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.120329 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Nov 8 00:29:04.120408 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.120481 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Nov 8 00:29:04.120559 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.120632 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Nov 8 00:29:04.120709 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.120783 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Nov 8 00:29:04.120860 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.120930 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Nov 8 00:29:04.121020 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:04.121093 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Nov 8 00:29:04.121170 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:29:04.121332 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:29:04.121409 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:29:04.121480 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Nov 8 00:29:04.121549 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Nov 8 00:29:04.121624 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:29:04.121694 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:29:04.121778 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:29:04.121856 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Nov 8 00:29:04.121930 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 8 00:29:04.122019 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Nov 8 00:29:04.122092 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:29:04.122164 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 00:29:04.122274 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 8 00:29:04.122356 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 00:29:04.122441 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Nov 8 00:29:04.122537 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:29:04.122609 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 00:29:04.122679 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:29:04.122760 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 8 00:29:04.122834 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Nov 8 00:29:04.122913 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 8 00:29:04.122984 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:29:04.123075 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 00:29:04.123146 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:29:04.123288 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 8 00:29:04.123363 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 8 00:29:04.123432 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:29:04.123505 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 00:29:04.123573 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:29:04.123651 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 00:29:04.123724 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Nov 8 00:29:04.123795 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 8 00:29:04.123866 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:29:04.123935 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 00:29:04.124021 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:29:04.124105 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 8 00:29:04.124178 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Nov 8 00:29:04.124304 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 8 00:29:04.124407 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:29:04.125276 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 00:29:04.125360 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:29:04.125369 kernel: acpiphp: Slot [0] registered
Nov 8 00:29:04.125455 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:29:04.125531 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Nov 8 00:29:04.125606 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 8 00:29:04.125681 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Nov 8 00:29:04.125752 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:29:04.125822 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 00:29:04.125892 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:29:04.125902 kernel: acpiphp: Slot [0-2] registered
Nov 8 00:29:04.125975 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:29:04.126060 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 8 00:29:04.126130 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:29:04.126139 kernel: acpiphp: Slot [0-3] registered
Nov 8 00:29:04.129366 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:29:04.129462 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 00:29:04.129533 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:29:04.129542 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:29:04.129554 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:29:04.129561 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:29:04.129568 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:29:04.129575 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:29:04.129583 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:29:04.129590 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:29:04.129597 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:29:04.129604 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:29:04.129611 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:29:04.129620 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:29:04.129627 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:29:04.129634 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:29:04.129641 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:29:04.129648 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:29:04.129655 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:29:04.129662 kernel: iommu: Default domain type: Translated
Nov 8 00:29:04.129670 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:29:04.129676 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:29:04.129685 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:29:04.129692 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:29:04.129699 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 8 00:29:04.129773 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:29:04.129842 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:29:04.129911 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:29:04.129920 kernel: vgaarb: loaded
Nov 8 00:29:04.129927 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:29:04.129935 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:29:04.129944 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:29:04.129951 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:29:04.129959 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:29:04.129966 kernel: pnp: PnP ACPI init
Nov 8 00:29:04.130068 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:29:04.130080 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:29:04.130087 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:29:04.130095 kernel: NET: Registered PF_INET protocol family
Nov 8 00:29:04.130105 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:29:04.130112 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:29:04.130119 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:29:04.130126 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:29:04.130134 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:29:04.130142 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:29:04.130148 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:29:04.130156 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:29:04.130163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:29:04.130171 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:29:04.130270 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 00:29:04.130343 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 00:29:04.130412 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 00:29:04.130498 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 00:29:04.130572 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 00:29:04.130642 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 00:29:04.130715 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:29:04.130785 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 00:29:04.130856 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 8 00:29:04.130926 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:29:04.130996 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 00:29:04.131084 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:29:04.131155 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:29:04.131238 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 00:29:04.131309 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:29:04.131396 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:29:04.131466 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 00:29:04.131537 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:29:04.131608 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:29:04.131677 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 00:29:04.131746 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:29:04.132114 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:29:04.132229 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 00:29:04.132306 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:29:04.132377 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:29:04.132448 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 8 00:29:04.132519 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 00:29:04.132608 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:29:04.132704 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:29:04.132807 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 8 00:29:04.132896 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 8 00:29:04.132977 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:29:04.133073 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:29:04.133146 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 8 00:29:04.135919 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 00:29:04.136024 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:29:04.136096 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:29:04.136161 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:29:04.136316 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:29:04.136380 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 8 00:29:04.136442 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:29:04.136503 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 8 00:29:04.136580 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 8 00:29:04.136646 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 8 00:29:04.136717 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 8 00:29:04.136783 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:29:04.136854 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 8 00:29:04.136918 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:29:04.136994 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 8 00:29:04.137078 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:29:04.137150 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 8 00:29:04.137284 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:29:04.137358 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 8 00:29:04.137424 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:29:04.137498 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 8 00:29:04.137563 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 8 00:29:04.137628 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:29:04.137699 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 8 00:29:04.137764 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 8 00:29:04.137828 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:29:04.137899 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 8 00:29:04.137967 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 8 00:29:04.138052 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:29:04.138064 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:29:04.138072 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:29:04.138079 kernel: Initialise system trusted keyrings
Nov 8 00:29:04.138087 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:29:04.138095 kernel: Key type asymmetric registered
Nov 8 00:29:04.138102 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:29:04.138112 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:29:04.138120 kernel: io scheduler mq-deadline registered
Nov 8 00:29:04.138127 kernel: io scheduler kyber registered
Nov 8 00:29:04.138135 kernel: io scheduler bfq registered
Nov 8 00:29:04.139347 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 8 00:29:04.139429 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 8 00:29:04.139500 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 8 00:29:04.139570 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 8 00:29:04.139645 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 8 00:29:04.139715 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 8 00:29:04.139788 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 8 00:29:04.139857 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 8 00:29:04.139927 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 8 00:29:04.139996 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 8 00:29:04.140081 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 8 00:29:04.140150 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 8 00:29:04.141255 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 8 00:29:04.141339 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 8 00:29:04.141412 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 8 00:29:04.141483 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 8 00:29:04.141494 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:29:04.141563 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 8 00:29:04.141634 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 8 00:29:04.141644 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:29:04.141652 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 8 00:29:04.141662 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:29:04.141670 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:29:04.141678 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:29:04.141685 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:29:04.141693 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:29:04.141701 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:29:04.141776 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:29:04.141844 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:29:04.141912 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:29:03 UTC (1762561743)
Nov 8 00:29:04.141977 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:29:04.141987 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:29:04.141995 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:29:04.142021 kernel: Segment Routing with IPv6
Nov 8 00:29:04.142029 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:29:04.142036 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:29:04.142044 kernel: Key type dns_resolver registered
Nov 8 00:29:04.142052 kernel: IPI shorthand broadcast: enabled
Nov 8 00:29:04.142062 kernel: sched_clock: Marking stable (1707014756, 244541410)->(2000421299, -48865133)
Nov 8 00:29:04.142070 kernel: registered taskstats version 1
Nov 8 00:29:04.142079 kernel: Loading compiled-in X.509 certificates
Nov 8 00:29:04.142087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:29:04.142094 kernel: Key type .fscrypt registered
Nov 8 00:29:04.142102 kernel: Key type fscrypt-provisioning registered
Nov 8 00:29:04.142109 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:29:04.142117 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:29:04.142126 kernel: ima: No architecture policies found
Nov 8 00:29:04.142135 kernel: clk: Disabling unused clocks
Nov 8 00:29:04.142143 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:29:04.142150 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:29:04.142158 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:29:04.142166 kernel: Run /init as init process
Nov 8 00:29:04.142174 kernel: with arguments:
Nov 8 00:29:04.142181 kernel: /init
Nov 8 00:29:04.144214 kernel: with environment:
Nov 8 00:29:04.144222 kernel: HOME=/
Nov 8 00:29:04.144232 kernel: TERM=linux
Nov 8 00:29:04.144242 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:29:04.144252 systemd[1]: Detected virtualization kvm.
Nov 8 00:29:04.144261 systemd[1]: Detected architecture x86-64.
Nov 8 00:29:04.144268 systemd[1]: Running in initrd.
Nov 8 00:29:04.144276 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:29:04.144284 systemd[1]: Hostname set to <localhost>.
Nov 8 00:29:04.144293 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:29:04.144301 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:29:04.144309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:29:04.144317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:29:04.144325 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:29:04.144333 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:29:04.144341 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:29:04.144349 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:29:04.144360 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:29:04.144368 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:29:04.144376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:29:04.144384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:29:04.144391 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:29:04.144399 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:29:04.144407 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:29:04.144417 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:29:04.144425 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:29:04.144432 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:29:04.144440 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:29:04.144449 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:29:04.144457 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:29:04.144465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:29:04.144472 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:29:04.144480 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:29:04.144490 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:29:04.144498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:29:04.144506 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:29:04.144513 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:29:04.144521 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:29:04.144529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:29:04.144537 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:04.144563 systemd-journald[187]: Collecting audit messages is disabled.
Nov 8 00:29:04.144586 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:29:04.144594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:29:04.144602 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:29:04.144612 systemd-journald[187]: Journal started
Nov 8 00:29:04.144630 systemd-journald[187]: Runtime Journal (/run/log/journal/90e24579ab95417ea2567db72145614b) is 4.8M, max 38.4M, 33.6M free.
Nov 8 00:29:04.147316 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:29:04.120284 systemd-modules-load[188]: Inserted module 'overlay'
Nov 8 00:29:04.227738 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:29:04.227764 kernel: Bridge firewalling registered
Nov 8 00:29:04.166824 systemd-modules-load[188]: Inserted module 'br_netfilter'
Nov 8 00:29:04.233213 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:29:04.233277 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:29:04.234887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:04.236586 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:29:04.247440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:29:04.250360 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:29:04.254296 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:29:04.265618 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:29:04.268105 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:04.272102 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:29:04.277246 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:29:04.279287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:29:04.285851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:29:04.294314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:29:04.299836 dracut-cmdline[215]: dracut-dracut-053
Nov 8 00:29:04.303751 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:29:04.320238 systemd-resolved[223]: Positive Trust Anchors:
Nov 8 00:29:04.321099 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:29:04.321132 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:29:04.331933 systemd-resolved[223]: Defaulting to hostname 'linux'.
Nov 8 00:29:04.332756 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:29:04.333800 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:29:04.384263 kernel: SCSI subsystem initialized
Nov 8 00:29:04.394212 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:29:04.405208 kernel: iscsi: registered transport (tcp)
Nov 8 00:29:04.424300 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:29:04.424365 kernel: QLogic iSCSI HBA Driver
Nov 8 00:29:04.472151 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:29:04.481520 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:29:04.530845 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:29:04.531060 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:29:04.536255 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:29:04.594251 kernel: raid6: avx2x4 gen() 12082 MB/s
Nov 8 00:29:04.613246 kernel: raid6: avx2x2 gen() 14059 MB/s
Nov 8 00:29:04.631258 kernel: raid6: avx2x1 gen() 18993 MB/s
Nov 8 00:29:04.631306 kernel: raid6: using algorithm avx2x1 gen() 18993 MB/s
Nov 8 00:29:04.651538 kernel: raid6: .... xor() 14009 MB/s, rmw enabled
Nov 8 00:29:04.651595 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:29:04.695281 kernel: xor: automatically using best checksumming function avx
Nov 8 00:29:04.888258 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:29:04.903474 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:29:04.910595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:29:04.921597 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Nov 8 00:29:04.925618 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:29:04.940495 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:29:04.958543 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Nov 8 00:29:05.003101 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:29:05.010476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:29:05.066811 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:29:05.075712 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:29:05.090011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:29:05.092527 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:29:05.094278 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:29:05.095730 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:29:05.101340 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:29:05.116986 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:29:05.139214 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:29:05.152197 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:29:05.159230 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:29:05.179233 kernel: libata version 3.00 loaded.
Nov 8 00:29:05.180288 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:29:05.205877 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:05.213054 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:29:05.213696 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:29:05.213845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:05.214510 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:05.236973 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:29:05.236995 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:29:05.223678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:05.256708 kernel: ACPI: bus type USB registered
Nov 8 00:29:05.256768 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:29:05.261288 kernel: usbcore: registered new interface driver hub
Nov 8 00:29:05.264227 kernel: usbcore: registered new device driver usb
Nov 8 00:29:05.275285 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 8 00:29:05.276220 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 8 00:29:05.276376 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:29:05.276513 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 8 00:29:05.277265 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:29:05.283496 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:29:05.283684 kernel: GPT:17805311 != 80003071
Nov 8 00:29:05.283734 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:29:05.283784 kernel: GPT:17805311 != 80003071
Nov 8 00:29:05.283833 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:29:05.283878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:05.288213 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:29:05.292213 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:29:05.292386 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 8 00:29:05.300226 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 8 00:29:05.301288 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:29:05.301451 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 8 00:29:05.301706 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 8 00:29:05.301811 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:29:05.301955 kernel: hub 1-0:1.0: 4 ports detected
Nov 8 00:29:05.302117 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 8 00:29:05.302285 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:29:05.302425 kernel: hub 2-0:1.0: 4 ports detected
Nov 8 00:29:05.311460 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:29:05.313600 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:29:05.315321 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:29:05.315483 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:29:05.329112 kernel: scsi host1: ahci
Nov 8 00:29:05.331247 kernel: scsi host2: ahci
Nov 8 00:29:05.331847 kernel: scsi host3: ahci
Nov 8 00:29:05.333250 kernel: scsi host4: ahci
Nov 8 00:29:05.333516 kernel: scsi host5: ahci
Nov 8 00:29:05.345505 kernel: scsi host6: ahci
Nov 8 00:29:05.345734 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
Nov 8 00:29:05.345746 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
Nov 8 00:29:05.345755 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
Nov 8 00:29:05.345765 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
Nov 8 00:29:05.345774 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
Nov 8 00:29:05.345783 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
Nov 8 00:29:05.354206 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (459)
Nov 8 00:29:05.371596 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:29:05.481565 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (451)
Nov 8 00:29:05.482466 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:05.488531 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:29:05.489294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:29:05.500326 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:29:05.505781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:29:05.513535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:29:05.538434 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:05.538485 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:05.538505 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 8 00:29:05.517332 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:29:05.546440 disk-uuid[564]: Primary Header is updated.
Nov 8 00:29:05.546440 disk-uuid[564]: Secondary Entries is updated.
Nov 8 00:29:05.546440 disk-uuid[564]: Secondary Header is updated.
Nov 8 00:29:05.556479 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:05.572877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:05.656221 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:05.656290 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:05.660567 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:05.661203 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:05.670234 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:29:05.670324 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:05.671612 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:29:05.674447 kernel: ata1.00: applying bridge limits
Nov 8 00:29:05.677420 kernel: ata1.00: configured for UDMA/100
Nov 8 00:29:05.695240 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:29:05.712233 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:29:05.746761 kernel: usbcore: registered new interface driver usbhid
Nov 8 00:29:05.746830 kernel: usbhid: USB HID core driver
Nov 8 00:29:05.755209 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Nov 8 00:29:05.761346 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 8 00:29:05.771007 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:29:05.771306 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:29:05.780210 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:29:06.548266 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:06.548359 disk-uuid[566]: The operation has completed successfully.
Nov 8 00:29:06.612831 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:29:06.613011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:29:06.639375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:29:06.655908 sh[597]: Success
Nov 8 00:29:06.680239 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:29:06.755947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:29:06.768360 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:29:06.771885 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:29:06.806209 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:29:06.806311 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:29:06.811806 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:29:06.817976 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:29:06.822565 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:29:06.839250 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:29:06.841912 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:29:06.843587 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:29:06.848382 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:29:06.850641 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:29:06.881107 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:06.881181 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:06.881209 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:29:06.890467 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:29:06.890549 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:29:06.904362 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:29:06.909926 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:06.915034 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:29:06.925437 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:29:06.957955 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:29:06.969915 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:29:07.011294 systemd-networkd[778]: lo: Link UP Nov 8 00:29:07.012220 systemd-networkd[778]: lo: Gained carrier Nov 8 00:29:07.015372 systemd-networkd[778]: Enumeration completed Nov 8 00:29:07.016132 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:29:07.017754 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:07.017757 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:29:07.022601 ignition[723]: Ignition 2.19.0 Nov 8 00:29:07.020370 systemd[1]: Reached target network.target - Network. Nov 8 00:29:07.022608 ignition[723]: Stage: fetch-offline Nov 8 00:29:07.021053 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:07.022638 ignition[723]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:07.021055 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:29:07.022645 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:07.022649 systemd-networkd[778]: eth0: Link UP Nov 8 00:29:07.022731 ignition[723]: parsed url from cmdline: "" Nov 8 00:29:07.022652 systemd-networkd[778]: eth0: Gained carrier Nov 8 00:29:07.022734 ignition[723]: no config URL provided Nov 8 00:29:07.022658 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:07.022739 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:29:07.024357 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:29:07.022745 ignition[723]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:29:07.027447 systemd-networkd[778]: eth1: Link UP Nov 8 00:29:07.022749 ignition[723]: failed to fetch config: resource requires networking Nov 8 00:29:07.027449 systemd-networkd[778]: eth1: Gained carrier Nov 8 00:29:07.022912 ignition[723]: Ignition finished successfully Nov 8 00:29:07.027456 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:07.032580 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:29:07.044761 ignition[787]: Ignition 2.19.0 Nov 8 00:29:07.044774 ignition[787]: Stage: fetch Nov 8 00:29:07.044913 ignition[787]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:07.044920 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:07.044987 ignition[787]: parsed url from cmdline: "" Nov 8 00:29:07.044990 ignition[787]: no config URL provided Nov 8 00:29:07.045007 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:29:07.045013 ignition[787]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:29:07.045029 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 8 00:29:07.045150 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 8 00:29:07.077323 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:29:07.082243 systemd-networkd[778]: eth0: DHCPv4 address 46.62.239.97/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:29:07.246155 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Nov 8 00:29:07.251107 ignition[787]: GET result: OK Nov 8 00:29:07.251248 ignition[787]: parsing config with SHA512: a0bf2b2ca1b3cfe58d7722c6eae92b70b9defc9623b9a1cfceca9aa9532432e9ad21ba9dfdd547e92e85b20464590179a312cc209a783973d9aa0104604713fc Nov 8 00:29:07.258782 unknown[787]: fetched base config from "system" Nov 8 00:29:07.258805 unknown[787]: fetched base config from "system" Nov 8 00:29:07.259940 ignition[787]: fetch: fetch complete Nov 8 00:29:07.258814 unknown[787]: fetched user config from "hetzner" Nov 8 00:29:07.259949 ignition[787]: fetch: fetch passed Nov 8 00:29:07.262645 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:29:07.260036 ignition[787]: Ignition finished successfully Nov 8 00:29:07.273448 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:29:07.293954 ignition[796]: Ignition 2.19.0 Nov 8 00:29:07.293973 ignition[796]: Stage: kargs Nov 8 00:29:07.296163 ignition[796]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:07.296182 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:07.300645 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:29:07.298967 ignition[796]: kargs: kargs passed Nov 8 00:29:07.299063 ignition[796]: Ignition finished successfully Nov 8 00:29:07.310476 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:29:07.333295 ignition[803]: Ignition 2.19.0 Nov 8 00:29:07.334816 ignition[803]: Stage: disks Nov 8 00:29:07.335148 ignition[803]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:07.340597 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:29:07.335164 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:07.347905 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:29:07.338865 ignition[803]: disks: disks passed Nov 8 00:29:07.350117 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:29:07.338939 ignition[803]: Ignition finished successfully Nov 8 00:29:07.352401 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:29:07.354681 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:29:07.356830 systemd[1]: Reached target basic.target - Basic System. 
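The fetch stage above fails on attempt #1 with "network is unreachable", succeeds on attempt #2 once DHCP has completed, and logs a SHA512 of the payload it parsed. A rough sketch of that retry-then-hash flow; the endpoint comes from the log itself, but the timeout and backoff values are assumptions rather than Ignition's real schedule:

```python
import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # from the log above

def fetch_userdata(max_attempts: int = 5, delay: float = 2.0) -> bytes:
    """GET the userdata, retrying while the network is still coming up."""
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("userdata endpoint unreachable")

payload = fetch_userdata()
print("parsing config with SHA512:", hashlib.sha512(payload).hexdigest())
```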
Nov 8 00:29:07.368501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:29:07.392021 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Nov 8 00:29:07.397132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:29:07.404331 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:29:07.535209 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:29:07.535832 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:29:07.537889 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:29:07.551388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:29:07.554385 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:29:07.556558 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:29:07.559080 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:29:07.559108 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:29:07.568259 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (820) Nov 8 00:29:07.569341 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:29:07.613636 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:07.613681 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:07.613702 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:29:07.613722 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:29:07.613742 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:29:07.612438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:29:07.617146 coreos-metadata[822]: Nov 08 00:29:07.615 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 8 00:29:07.617146 coreos-metadata[822]: Nov 08 00:29:07.617 INFO Fetch successful Nov 8 00:29:07.621180 coreos-metadata[822]: Nov 08 00:29:07.618 INFO wrote hostname ci-4081-3-6-n-dcea41702a to /sysroot/etc/hostname Nov 8 00:29:07.625399 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:29:07.627396 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:29:07.661023 initrd-setup-root[848]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:29:07.666301 initrd-setup-root[855]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:29:07.672304 initrd-setup-root[862]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:29:07.676808 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:29:07.745398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:29:07.750336 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:29:07.757926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:29:07.769429 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:07.792782 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
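For scale, the fsck summary above ("14/1628000 files, 120691/1617920 blocks") is used/total counts, i.e. the freshly provisioned root filesystem is almost empty:

```python
files_used, files_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920
print(f"inodes in use: {files_used / files_total:.4%}")   # ~0.0009%
print(f"blocks in use: {blocks_used / blocks_total:.2%}") # ~7.46%
```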
Nov 8 00:29:07.796128 ignition[938]: INFO : Ignition 2.19.0 Nov 8 00:29:07.796128 ignition[938]: INFO : Stage: mount Nov 8 00:29:07.799655 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:07.799655 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:07.799655 ignition[938]: INFO : mount: mount passed Nov 8 00:29:07.799655 ignition[938]: INFO : Ignition finished successfully Nov 8 00:29:07.799507 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:29:07.800681 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:29:07.809313 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:29:07.815176 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:29:07.833501 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948) Nov 8 00:29:07.838939 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:29:07.838974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:29:07.842394 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:29:07.854482 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:29:07.854515 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:29:07.858599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:29:07.880219 ignition[965]: INFO : Ignition 2.19.0 Nov 8 00:29:07.880219 ignition[965]: INFO : Stage: files Nov 8 00:29:07.880219 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:07.880219 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:07.885345 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:29:07.886416 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:29:07.886416 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:29:07.891261 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:29:07.891261 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:29:07.895491 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:29:07.895491 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:29:07.895491 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:29:07.891328 unknown[965]: wrote ssh authorized keys file for user: core Nov 8 00:29:08.091814 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:29:08.399300 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 
00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:29:08.402100 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:29:08.815806 systemd-networkd[778]: eth0: Gained IPv6LL Nov 8 00:29:08.824066 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:29:08.944943 systemd-networkd[778]: eth1: Gained IPv6LL Nov 8 00:29:09.116844 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:29:09.116844 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(f): [started] 
setting preset to enabled for "prepare-helm.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:29:09.124735 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:29:09.124735 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:29:09.124735 ignition[965]: INFO : files: files passed Nov 8 00:29:09.124735 ignition[965]: INFO : Ignition finished successfully Nov 8 00:29:09.121924 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:29:09.129598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:29:09.134324 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:29:09.140053 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:29:09.156396 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:29:09.156396 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:29:09.140294 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:29:09.162832 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:29:09.155238 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:29:09.157493 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:29:09.165357 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:29:09.196961 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:29:09.197156 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:29:09.199427 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:29:09.201314 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:29:09.203304 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:29:09.210419 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:29:09.226556 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:29:09.233345 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:29:09.243253 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:29:09.244809 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:29:09.245608 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:29:09.246421 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:29:09.246522 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:29:09.248646 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:29:09.249702 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:29:09.251181 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:29:09.252826 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
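The files stage above is driven by the userdata fetched earlier. A hypothetical, heavily trimmed Ignition v3-style config that would produce roughly the logged operations; the field names follow the Ignition schema, but every value here is a placeholder:

```python
# Hypothetical, trimmed-down Ignition-style config (expressed as a Python
# dict for readability); real configs are JSON and carry many more fields.
ignition_config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
             "contents": {"source":
                 "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP=stable"}},  # placeholder body
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n..."},
        ],
    },
}
```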
Nov 8 00:29:09.254300 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:29:09.255779 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:29:09.257389 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:29:09.258981 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:29:09.261286 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:29:09.263653 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:29:09.265672 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:29:09.265804 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:29:09.268567 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:29:09.269856 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:29:09.271715 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:29:09.272478 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:29:09.274446 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:29:09.274612 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:29:09.277078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:29:09.277270 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:29:09.279558 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:29:09.279733 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:29:09.281381 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:29:09.281562 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:29:09.289719 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:29:09.291680 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:29:09.293419 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:29:09.304021 ignition[1018]: INFO : Ignition 2.19.0 Nov 8 00:29:09.304021 ignition[1018]: INFO : Stage: umount Nov 8 00:29:09.309341 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:29:09.309341 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 8 00:29:09.309341 ignition[1018]: INFO : umount: umount passed Nov 8 00:29:09.309341 ignition[1018]: INFO : Ignition finished successfully Nov 8 00:29:09.306631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:29:09.308174 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:29:09.308384 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:29:09.310388 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:29:09.310507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:29:09.315549 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:29:09.318231 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:29:09.323303 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:29:09.323378 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:29:09.330667 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 8 00:29:09.331519 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:29:09.331583 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:29:09.334108 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:29:09.334176 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:29:09.335307 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:29:09.335341 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:29:09.336667 systemd[1]: Stopped target network.target - Network. Nov 8 00:29:09.337933 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:29:09.337975 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:29:09.339302 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:29:09.340554 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:29:09.345248 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:29:09.346471 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:29:09.348029 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:29:09.349399 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:29:09.349442 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:29:09.350717 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:29:09.350757 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:29:09.352008 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:29:09.352055 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:29:09.353312 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:29:09.353348 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:29:09.354780 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:29:09.356039 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:29:09.357569 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:29:09.357647 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:29:09.358247 systemd-networkd[778]: eth1: DHCPv6 lease lost Nov 8 00:29:09.359366 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:29:09.359424 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:29:09.362240 systemd-networkd[778]: eth0: DHCPv6 lease lost Nov 8 00:29:09.363525 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:29:09.363613 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:29:09.365128 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:29:09.365167 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:29:09.371381 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:29:09.372036 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:29:09.372089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:29:09.373738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:29:09.379115 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:29:09.379239 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Nov 8 00:29:09.388588 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:29:09.388739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:29:09.390720 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:29:09.391152 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:29:09.394774 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:29:09.394848 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:29:09.396133 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:29:09.396170 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:29:09.397714 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:29:09.397776 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:29:09.399916 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:29:09.399962 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:29:09.401531 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:29:09.401571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:29:09.410477 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:29:09.412082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:29:09.412162 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:09.415430 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:29:09.415511 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:29:09.416212 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:29:09.416261 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:29:09.416939 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:29:09.417007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:29:09.420356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:29:09.420418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:09.422032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:29:09.422144 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:29:09.423961 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:29:09.430405 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:29:09.439789 systemd[1]: Switching root. Nov 8 00:29:09.505728 systemd-journald[187]: Journal stopped Nov 8 00:29:10.601667 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). 
Nov 8 00:29:10.601722 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:29:10.601738 kernel: SELinux: policy capability open_perms=1 Nov 8 00:29:10.601747 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:29:10.601756 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:29:10.601764 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:29:10.601773 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:29:10.601783 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:29:10.601795 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:29:10.601803 kernel: audit: type=1403 audit(1762561749.704:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:29:10.601813 systemd[1]: Successfully loaded SELinux policy in 67.655ms. Nov 8 00:29:10.601838 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.362ms. Nov 8 00:29:10.601849 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:29:10.601858 systemd[1]: Detected virtualization kvm. Nov 8 00:29:10.601868 systemd[1]: Detected architecture x86-64. Nov 8 00:29:10.601879 systemd[1]: Detected first boot. Nov 8 00:29:10.601889 systemd[1]: Hostname set to <ci-4081-3-6-n-dcea41702a>. Nov 8 00:29:10.601899 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:29:10.601908 zram_generator::config[1060]: No configuration found. Nov 8 00:29:10.601923 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:29:10.601932 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:29:10.601941 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:29:10.601951 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:29:10.601962 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:29:10.601972 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:29:10.601993 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:29:10.602003 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:29:10.602012 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:29:10.602023 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:29:10.602032 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:29:10.602043 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:29:10.602052 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:29:10.602064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:29:10.602074 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:29:10.602083 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:29:10.602093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
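"Initializing machine ID from VM UUID" above means systemd adopted the hypervisor-supplied DMI product UUID as the machine ID, which is simply the UUID's 32 hex digits with the dashes dropped. A sketch, assuming the VM UUID was the dashed form of the journal directory name that appears later in this log:

```python
import uuid

def machine_id_from_vm_uuid(vm_uuid: str) -> str:
    """systemd's machine ID is 32 lowercase hex characters; deriving it
    from the DMI product UUID is just normalization plus dash removal."""
    return uuid.UUID(vm_uuid).hex

# Assumed VM UUID, reverse-engineered from the journal path seen below:
print(machine_id_from_vm_uuid("90E24579-AB95-417E-A256-7DB72145614B"))
# -> 90e24579ab95417ea2567db72145614b
```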
Nov 8 00:29:10.602104 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:29:10.602113 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:29:10.602122 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:29:10.602132 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:29:10.602143 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:29:10.602153 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:29:10.602163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:29:10.602172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:29:10.602222 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:29:10.602238 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:29:10.602251 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:29:10.602266 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:29:10.602280 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:29:10.602292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:29:10.602305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:29:10.602318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:29:10.602330 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:29:10.602345 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:29:10.602357 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:29:10.602369 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:29:10.602384 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:10.602397 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:29:10.602410 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:29:10.602422 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:29:10.602435 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:29:10.602451 systemd[1]: Reached target machines.target - Containers. Nov 8 00:29:10.602466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:29:10.602479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:29:10.602492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:29:10.602505 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:29:10.602516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:29:10.602530 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:29:10.602543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:29:10.602556 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:29:10.602570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:29:10.602583 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:29:10.602596 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:29:10.602608 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:29:10.602623 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:29:10.602635 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:29:10.602648 kernel: loop: module loaded Nov 8 00:29:10.602660 kernel: fuse: init (API version 7.39) Nov 8 00:29:10.602674 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:29:10.602921 systemd-journald[1150]: Collecting audit messages is disabled. Nov 8 00:29:10.602949 kernel: ACPI: bus type drm_connector registered Nov 8 00:29:10.602963 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:29:10.602976 systemd-journald[1150]: Journal started Nov 8 00:29:10.603090 systemd-journald[1150]: Runtime Journal (/run/log/journal/90e24579ab95417ea2567db72145614b) is 4.8M, max 38.4M, 33.6M free. Nov 8 00:29:10.262979 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:29:10.284436 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:29:10.284797 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:29:10.610298 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:29:10.616261 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:29:10.627041 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:29:10.627088 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:29:10.627101 systemd[1]: Stopped verity-setup.service. Nov 8 00:29:10.633439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:10.644042 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:29:10.636407 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:29:10.637110 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:29:10.637819 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:29:10.644475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:29:10.645277 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:29:10.646042 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:29:10.646895 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:29:10.647863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:29:10.648882 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:29:10.649060 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:29:10.650008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:29:10.650162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:29:10.651044 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:29:10.651217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Nov 8 00:29:10.652071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:29:10.652293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:29:10.653194 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:29:10.653362 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:29:10.654320 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:29:10.654478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:29:10.655450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:29:10.656359 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:29:10.657306 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:29:10.665734 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:29:10.672303 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:29:10.678771 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:29:10.679921 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:29:10.679952 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:29:10.681571 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:29:10.686276 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:29:10.692106 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:29:10.693316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:29:10.699312 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:29:10.703337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:29:10.704705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:29:10.710772 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:29:10.712636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:29:10.719134 systemd-journald[1150]: Time spent on flushing to /var/log/journal/90e24579ab95417ea2567db72145614b is 82.994ms for 1124 entries. Nov 8 00:29:10.719134 systemd-journald[1150]: System Journal (/var/log/journal/90e24579ab95417ea2567db72145614b) is 8.0M, max 584.8M, 576.8M free. Nov 8 00:29:10.850272 systemd-journald[1150]: Received client request to flush runtime journal. Nov 8 00:29:10.850320 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:29:10.850339 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:29:10.850352 kernel: loop1: detected capacity change from 0 to 8 Nov 8 00:29:10.722425 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:29:10.724764 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:29:10.742649 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
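The journald size figures above line up with its documented defaults: RuntimeMaxUse= and SystemMaxUse= each default to 10% of the backing filesystem (capped at 4G), and the flush report implies roughly 74 µs per journal entry. Quick arithmetic:

```python
runtime_max_mib, system_max_mib = 38.4, 584.8   # "max" values from the log
# journald defaults both limits to 10% of the respective filesystem:
print(f"/run is about {runtime_max_mib * 10:.0f} MiB")        # ~384 MiB
print(f"/var is about {system_max_mib * 10 / 1024:.1f} GiB")  # ~5.7 GiB

flush_ms, entries = 82.994, 1124                # from the flush report
print(f"~{flush_ms / entries * 1000:.1f} us per entry flushed")  # ~73.8 us
```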
Nov 8 00:29:10.749710 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:29:10.750714 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:29:10.752312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:29:10.754019 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:29:10.755462 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:29:10.764514 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:29:10.773341 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:29:10.776535 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:29:10.820746 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:29:10.836610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:29:10.853486 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:29:10.857570 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:29:10.858464 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:29:10.864405 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:29:10.872707 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:29:10.876228 kernel: loop2: detected capacity change from 0 to 142488 Nov 8 00:29:10.897389 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Nov 8 00:29:10.897409 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Nov 8 00:29:10.905746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:29:10.930421 kernel: loop3: detected capacity change from 0 to 229808 Nov 8 00:29:10.976687 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:29:11.000223 kernel: loop5: detected capacity change from 0 to 8 Nov 8 00:29:11.006244 kernel: loop6: detected capacity change from 0 to 142488 Nov 8 00:29:11.035228 kernel: loop7: detected capacity change from 0 to 229808 Nov 8 00:29:11.072477 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 8 00:29:11.073422 (sd-merge)[1207]: Merged extensions into '/usr'. Nov 8 00:29:11.079250 systemd[1]: Reloading requested from client PID 1180 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:29:11.079341 systemd[1]: Reloading... Nov 8 00:29:11.153232 zram_generator::config[1232]: No configuration found. Nov 8 00:29:11.259855 ldconfig[1175]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:29:11.310787 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:11.375378 systemd[1]: Reloading finished in 295 ms. Nov 8 00:29:11.400545 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:29:11.402056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
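The sd-merge step above is systemd-sysext stacking the four extension images over the base /usr with a read-only overlayfs, which is why each extension's loop device shows up twice (once when scanned, once when merged). Conceptually the result is equivalent to the mount below; the staging paths are illustrative, not sysext's actual mount points:

```python
import subprocess

# Illustrative lowerdir stack: the first entry is the topmost overlayfs
# layer, the base /usr is the bottom one. A read-only overlay needs only
# lowerdirs (no upperdir/workdir).
lowers = [
    "/run/extensions/oem-hetzner/usr",
    "/run/extensions/kubernetes/usr",
    "/run/extensions/docker-flatcar/usr",
    "/run/extensions/containerd-flatcar/usr",
    "/usr",
]
subprocess.run(
    ["mount", "-t", "overlay", "overlay",
     "-o", "lowerdir=" + ":".join(lowers), "/usr"],
    check=True,
)
```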
Nov 8 00:29:11.404771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:29:11.412353 systemd[1]: Starting ensure-sysext.service... Nov 8 00:29:11.415325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:29:11.423311 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:29:11.430305 systemd[1]: Reloading requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:29:11.430320 systemd[1]: Reloading... Nov 8 00:29:11.443563 systemd-udevd[1279]: Using default interface naming scheme 'v255'. Nov 8 00:29:11.445576 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:29:11.446119 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:29:11.447223 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:29:11.447470 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Nov 8 00:29:11.447523 systemd-tmpfiles[1278]: ACLs are not supported, ignoring. Nov 8 00:29:11.451570 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:29:11.451579 systemd-tmpfiles[1278]: Skipping /boot Nov 8 00:29:11.457943 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:29:11.458028 systemd-tmpfiles[1278]: Skipping /boot Nov 8 00:29:11.514204 zram_generator::config[1316]: No configuration found. Nov 8 00:29:11.611086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:11.653661 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 8 00:29:11.666226 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:29:11.671209 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:29:11.687205 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1328) Nov 8 00:29:11.689132 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:29:11.689474 systemd[1]: Reloading finished in 258 ms. Nov 8 00:29:11.703301 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:29:11.704645 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:29:11.727937 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 8 00:29:11.744748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:11.748382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:29:11.752399 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:29:11.753656 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:29:11.762411 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:29:11.765447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 8 00:29:11.767344 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:29:11.769374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:29:11.773949 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:29:11.792223 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 8 00:29:11.792300 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 8 00:29:11.792482 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 8 00:29:11.792672 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 8 00:29:11.784339 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:29:11.798208 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Nov 8 00:29:11.798244 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Nov 8 00:29:11.793265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:29:11.812741 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:29:11.814447 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 8 00:29:11.814484 kernel: [drm] features: -context_init Nov 8 00:29:11.817651 kernel: [drm] number of scanouts: 1 Nov 8 00:29:11.817685 kernel: [drm] number of cap sets: 0 Nov 8 00:29:11.823243 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Nov 8 00:29:11.832877 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:29:11.832951 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 8 00:29:11.832965 kernel: Console: switching to colour frame buffer device 160x50 Nov 8 00:29:11.828912 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:29:11.829885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:11.831169 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:29:11.832358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:29:11.845278 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 8 00:29:11.850090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:29:11.850316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:29:11.852808 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:29:11.853749 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:29:11.858654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:29:11.879790 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:11.880054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:29:11.887079 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:29:11.893393 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:29:11.897067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:29:11.907491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 8 00:29:11.907738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:29:11.913256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:29:11.917753 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:29:11.917833 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:29:11.919815 systemd[1]: Finished ensure-sysext.service. Nov 8 00:29:11.921270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:29:11.921374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:29:11.921718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:29:11.921817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:29:11.922163 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:29:11.922272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:29:11.927291 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:29:11.932168 augenrules[1421]: No rules Nov 8 00:29:11.933413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:29:11.938234 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:29:11.939627 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:29:11.939730 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:29:11.946376 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:29:11.951718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:29:11.952350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:29:11.959450 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:29:11.963382 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:29:11.965995 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:11.966930 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:29:11.978564 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:29:11.985417 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:29:12.000434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:29:12.000602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:29:12.014386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:29:12.016615 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Nov 8 00:29:12.046289 systemd-networkd[1395]: lo: Link UP Nov 8 00:29:12.046299 systemd-networkd[1395]: lo: Gained carrier Nov 8 00:29:12.049407 systemd-networkd[1395]: Enumeration completed Nov 8 00:29:12.049492 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:29:12.054584 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:12.054594 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:29:12.060758 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:12.060766 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:29:12.062609 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:29:12.066467 systemd-networkd[1395]: eth0: Link UP Nov 8 00:29:12.066478 systemd-networkd[1395]: eth0: Gained carrier Nov 8 00:29:12.066508 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:12.068831 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:29:12.075205 systemd-networkd[1395]: eth1: Link UP Nov 8 00:29:12.075706 systemd-networkd[1395]: eth1: Gained carrier Nov 8 00:29:12.075791 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:29:12.076555 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:29:12.108413 systemd-resolved[1396]: Positive Trust Anchors: Nov 8 00:29:12.108431 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:29:12.108461 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:29:12.109799 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:29:12.114347 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:29:12.113724 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:29:12.120941 systemd-resolved[1396]: Using system hostname 'ci-4081-3-6-n-dcea41702a'. Nov 8 00:29:12.124575 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:29:12.126483 systemd[1]: Reached target network.target - Network. Nov 8 00:29:12.127113 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:29:12.130427 systemd-networkd[1395]: eth0: DHCPv4 address 46.62.239.97/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:29:12.132612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
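The positive trust anchor systemd-resolved prints above is the IANA root DNSSEC key (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256). As a cross-check, the same DS record can be recomputed from the live root DNSKEY set; a minimal sketch using the third-party dnspython package (an assumption on my part — nothing here implies it is installed on this host):

    import dns.resolver, dns.dnssec, dns.name

    # Fetch the root zone's DNSKEY set and recompute the DS record for the
    # key-signing key (flags == 257). The output should match the
    # "20326 8 2 e06d44b8..." anchor logged by systemd-resolved above.
    for key in dns.resolver.resolve(".", "DNSKEY"):
        if key.flags == 257:
            print(dns.dnssec.make_ds(dns.name.root, key, "SHA256"))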
Nov 8 00:29:12.134350 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:29:12.135302 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Nov 8 00:29:12.146753 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:29:12.147759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:29:12.148758 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:29:12.150403 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:29:12.150894 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:29:12.153171 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:29:12.153771 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:29:12.154417 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:29:12.154952 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:29:12.155082 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:29:12.155598 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:29:12.158229 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:29:12.160627 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:29:12.166747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:29:12.168994 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:29:12.169940 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:29:12.170461 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:29:12.170846 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:29:12.173337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:29:12.173656 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:29:12.176280 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:29:12.180746 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:29:12.188466 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:29:12.195259 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:29:12.199315 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:29:12.204490 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:29:12.206014 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:29:12.217341 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:29:12.223009 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:29:12.228225 jq[1468]: false Nov 8 00:29:12.228333 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
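Both interfaces above receive /32 leases (46.62.239.97/32 on eth0, 10.0.0.3/32 on eth1), so there is no on-link subnet at all and the gateway 172.31.1.1 does not fall inside the prefix that contains the address; networkd has to install an explicit on-link host route to reach it. A quick illustration with the standard-library ipaddress module, using the values from the log:

    import ipaddress

    # Address and gateway exactly as acquired in the DHCPv4 log lines above.
    iface = ipaddress.ip_interface("46.62.239.97/32")
    gateway = ipaddress.ip_address("172.31.1.1")

    print(iface.network.num_addresses)  # 1 -> the /32 covers only the host itself
    print(gateway in iface.network)     # False -> gateway is not on-link by prefix,
                                        # so it needs an explicit on-link route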
Nov 8 00:29:12.234201 coreos-metadata[1464]: Nov 08 00:29:12.234 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 8 00:29:12.234664 dbus-daemon[1467]: [system] SELinux support is enabled Nov 8 00:29:12.239352 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:29:12.240244 coreos-metadata[1464]: Nov 08 00:29:12.240 INFO Fetch successful Nov 8 00:29:12.240310 coreos-metadata[1464]: Nov 08 00:29:12.240 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 8 00:29:12.240379 coreos-metadata[1464]: Nov 08 00:29:12.240 INFO Fetch successful Nov 8 00:29:12.243409 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:29:12.254368 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:29:12.256522 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:29:12.257035 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:29:12.260320 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:29:12.262319 extend-filesystems[1469]: Found loop4 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found loop5 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found loop6 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found loop7 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda1 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda2 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda3 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found usr Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda4 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda6 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda7 Nov 8 00:29:12.267235 extend-filesystems[1469]: Found sda9 Nov 8 00:29:12.267235 extend-filesystems[1469]: Checking size of /dev/sda9 Nov 8 00:29:12.264306 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:29:12.267348 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:29:12.280332 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:29:12.295545 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:29:12.295697 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:29:12.295940 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:29:12.296079 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:29:12.322427 jq[1480]: true Nov 8 00:29:12.325286 extend-filesystems[1469]: Resized partition /dev/sda9 Nov 8 00:29:12.308740 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:29:12.327681 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:29:12.310235 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:29:13.264191 systemd-resolved[1396]: Clock change detected. Flushing caches. Nov 8 00:29:13.264388 systemd-timesyncd[1434]: Contacted time server 78.47.56.71:123 (0.flatcar.pool.ntp.org). Nov 8 00:29:13.264436 systemd-timesyncd[1434]: Initial clock synchronization to Sat 2025-11-08 00:29:13.263514 UTC. 
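coreos-metadata is fetching the link-local Hetzner metadata service shown in the log; the same endpoint can be queried by hand from inside the guest. A minimal stdlib sketch (the URL is taken verbatim from the log above and is only reachable from the server itself):

    import urllib.request

    # Same endpoint coreos-metadata fetched above; returns instance metadata.
    URL = "http://169.254.169.254/hetzner/v1/metadata"
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())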
Nov 8 00:29:13.267432 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 8 00:29:13.267575 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:29:13.267620 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:29:13.268235 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:29:13.268252 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:29:13.280473 (ntainerd)[1513]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:29:13.295427 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1293) Nov 8 00:29:13.298227 update_engine[1478]: I20251108 00:29:13.298153 1478 main.cc:92] Flatcar Update Engine starting Nov 8 00:29:13.300305 tar[1494]: linux-amd64/LICENSE Nov 8 00:29:13.300517 tar[1494]: linux-amd64/helm Nov 8 00:29:13.302710 jq[1499]: true Nov 8 00:29:13.311559 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:29:13.316631 update_engine[1478]: I20251108 00:29:13.316567 1478 update_check_scheduler.cc:74] Next update check in 11m19s Nov 8 00:29:13.321597 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:29:13.355698 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:29:13.356507 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:29:13.364051 systemd-logind[1477]: New seat seat0. Nov 8 00:29:13.378887 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (Power Button) Nov 8 00:29:13.378906 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:29:13.379885 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:29:13.489810 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:29:13.494003 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:29:13.509185 bash[1535]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:29:13.512661 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:29:13.519675 systemd[1]: Starting sshkeys.service... Nov 8 00:29:13.523930 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:29:13.538681 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:29:13.546876 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:29:13.555100 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:29:13.572965 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:29:13.573154 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:29:13.586759 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
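The EXT4 messages above record an online grow of /dev/sda9 from 1617920 to 9393147 blocks; with the filesystem's 4 KiB block size (the "(4k)" in the resize2fs output further down) that is roughly a 6.2 GiB image expanding to fill a ~35.8 GiB disk:

    # Block counts copied from the EXT4-fs resize messages above; 4 KiB blocks.
    BLOCK = 4096
    old_blocks, new_blocks = 1_617_920, 9_393_147

    print(f"{old_blocks * BLOCK / 2**30:.1f} GiB")  # ~6.2 GiB before the resize
    print(f"{new_blocks * BLOCK / 2**30:.1f} GiB")  # ~35.8 GiB after resize2fs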
Nov 8 00:29:13.591771 coreos-metadata[1556]: Nov 08 00:29:13.591 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 8 00:29:13.592752 containerd[1513]: time="2025-11-08T00:29:13.592661435Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:29:13.594606 coreos-metadata[1556]: Nov 08 00:29:13.594 INFO Fetch successful Nov 8 00:29:13.602961 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:29:13.608441 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 8 00:29:13.613800 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:29:13.619771 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:29:13.640257 containerd[1513]: time="2025-11-08T00:29:13.628486213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:13.640257 containerd[1513]: time="2025-11-08T00:29:13.629701002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:13.640257 containerd[1513]: time="2025-11-08T00:29:13.629724165Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:29:13.640257 containerd[1513]: time="2025-11-08T00:29:13.629737891Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:29:13.620656 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:29:13.641356 containerd[1513]: time="2025-11-08T00:29:13.641331100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:29:13.641385 containerd[1513]: time="2025-11-08T00:29:13.641355837Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:13.641509 containerd[1513]: time="2025-11-08T00:29:13.641433021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:13.641509 containerd[1513]: time="2025-11-08T00:29:13.641447358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:13.641489 unknown[1556]: wrote ssh authorized keys file for user: core Nov 8 00:29:13.642143 containerd[1513]: time="2025-11-08T00:29:13.642119739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:13.642143 containerd[1513]: time="2025-11-08T00:29:13.642139105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:13.642238 containerd[1513]: time="2025-11-08T00:29:13.642151609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:13.642238 containerd[1513]: time="2025-11-08T00:29:13.642159834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:29:13.644769 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:29:13.644769 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 8 00:29:13.644769 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 8 00:29:13.651869 extend-filesystems[1469]: Resized filesystem in /dev/sda9 Nov 8 00:29:13.651869 extend-filesystems[1469]: Found sr0 Nov 8 00:29:13.654399 containerd[1513]: time="2025-11-08T00:29:13.648039570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:13.654399 containerd[1513]: time="2025-11-08T00:29:13.649602831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:29:13.654399 containerd[1513]: time="2025-11-08T00:29:13.649807996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:29:13.654399 containerd[1513]: time="2025-11-08T00:29:13.650507928Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:29:13.654399 containerd[1513]: time="2025-11-08T00:29:13.650610300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:29:13.654399 containerd[1513]: time="2025-11-08T00:29:13.652020034Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:29:13.646479 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:29:13.646666 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:29:13.659465 containerd[1513]: time="2025-11-08T00:29:13.659225446Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:29:13.660679 containerd[1513]: time="2025-11-08T00:29:13.660489325Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:29:13.660679 containerd[1513]: time="2025-11-08T00:29:13.660520374Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:29:13.660679 containerd[1513]: time="2025-11-08T00:29:13.660536073Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:29:13.660679 containerd[1513]: time="2025-11-08T00:29:13.660549138Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:29:13.660763 containerd[1513]: time="2025-11-08T00:29:13.660679713Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:29:13.660914 containerd[1513]: time="2025-11-08T00:29:13.660895567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:29:13.660988 containerd[1513]: time="2025-11-08T00:29:13.660973032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:29:13.661009 containerd[1513]: time="2025-11-08T00:29:13.660990525Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Nov 8 00:29:13.661009 containerd[1513]: time="2025-11-08T00:29:13.661001977Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:29:13.661039 containerd[1513]: time="2025-11-08T00:29:13.661013298Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661039 containerd[1513]: time="2025-11-08T00:29:13.661023928Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661039 containerd[1513]: time="2025-11-08T00:29:13.661034037Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661087 containerd[1513]: time="2025-11-08T00:29:13.661046300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661087 containerd[1513]: time="2025-11-08T00:29:13.661060066Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661087 containerd[1513]: time="2025-11-08T00:29:13.661073231Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661135 containerd[1513]: time="2025-11-08T00:29:13.661084031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661135 containerd[1513]: time="2025-11-08T00:29:13.661094731Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:29:13.661135 containerd[1513]: time="2025-11-08T00:29:13.661112504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661135 containerd[1513]: time="2025-11-08T00:29:13.661124006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661196 containerd[1513]: time="2025-11-08T00:29:13.661134435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661196 containerd[1513]: time="2025-11-08T00:29:13.661145836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661196 containerd[1513]: time="2025-11-08T00:29:13.661155795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661196 containerd[1513]: time="2025-11-08T00:29:13.661166395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661196 containerd[1513]: time="2025-11-08T00:29:13.661175943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661196 containerd[1513]: time="2025-11-08T00:29:13.661186032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661215578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661229273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661244312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661254440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661264759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661277574Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661293524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661306 containerd[1513]: time="2025-11-08T00:29:13.661303372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661312780Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661349178Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661363936Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661373153Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661383061Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661390996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:29:13.661442 containerd[1513]: time="2025-11-08T00:29:13.661401296Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:29:13.662984 containerd[1513]: time="2025-11-08T00:29:13.662960579Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:29:13.662984 containerd[1513]: time="2025-11-08T00:29:13.662982721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:29:13.663896 containerd[1513]: time="2025-11-08T00:29:13.663843866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:29:13.663995 containerd[1513]: time="2025-11-08T00:29:13.663941599Z" level=info msg="Connect containerd service" Nov 8 00:29:13.663995 containerd[1513]: time="2025-11-08T00:29:13.663971225Z" level=info msg="using legacy CRI server" Nov 8 00:29:13.663995 containerd[1513]: time="2025-11-08T00:29:13.663976865Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:29:13.664344 containerd[1513]: time="2025-11-08T00:29:13.664315500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:29:13.667491 containerd[1513]: time="2025-11-08T00:29:13.667172007Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:29:13.668122 
containerd[1513]: time="2025-11-08T00:29:13.667885385Z" level=info msg="Start subscribing containerd event" Nov 8 00:29:13.668122 containerd[1513]: time="2025-11-08T00:29:13.667924598Z" level=info msg="Start recovering state" Nov 8 00:29:13.668122 containerd[1513]: time="2025-11-08T00:29:13.667969683Z" level=info msg="Start event monitor" Nov 8 00:29:13.668122 containerd[1513]: time="2025-11-08T00:29:13.667981715Z" level=info msg="Start snapshots syncer" Nov 8 00:29:13.668122 containerd[1513]: time="2025-11-08T00:29:13.667988318Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:29:13.668122 containerd[1513]: time="2025-11-08T00:29:13.667994610Z" level=info msg="Start streaming server" Nov 8 00:29:13.668390 containerd[1513]: time="2025-11-08T00:29:13.668377067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:29:13.668476 containerd[1513]: time="2025-11-08T00:29:13.668465974Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:29:13.668550 containerd[1513]: time="2025-11-08T00:29:13.668541345Z" level=info msg="containerd successfully booted in 0.082435s" Nov 8 00:29:13.668597 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:29:13.676600 update-ssh-keys[1572]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:29:13.677150 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:29:13.682627 systemd[1]: Finished sshkeys.service. Nov 8 00:29:13.973737 tar[1494]: linux-amd64/README.md Nov 8 00:29:13.983029 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:29:14.028689 systemd-networkd[1395]: eth0: Gained IPv6LL Nov 8 00:29:14.032683 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:29:14.035665 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:29:14.046773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:14.052862 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:29:14.089566 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:29:14.540579 systemd-networkd[1395]: eth1: Gained IPv6LL Nov 8 00:29:15.364611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:15.368230 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:29:15.372364 systemd[1]: Startup finished in 1.891s (kernel) + 5.903s (initrd) + 4.808s (userspace) = 12.602s. Nov 8 00:29:15.378168 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:16.315607 kubelet[1595]: E1108 00:29:16.315523 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:16.319493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:16.319669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:16.319972 systemd[1]: kubelet.service: Consumed 1.573s CPU time. Nov 8 00:29:26.437991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 8 00:29:26.445990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:26.579066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:26.583732 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:26.623765 kubelet[1614]: E1108 00:29:26.623697 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:26.627344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:26.627551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:36.687267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:29:36.693760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:36.800986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:36.804598 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:36.842390 kubelet[1629]: E1108 00:29:36.842317 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:36.845599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:36.845732 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:46.937554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:29:46.942669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:47.105730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:47.117746 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:47.168879 kubelet[1644]: E1108 00:29:47.168807 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:47.171893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:47.172206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:49.980120 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:29:49.985853 systemd[1]: Started sshd@0-46.62.239.97:22-147.75.109.163:55722.service - OpenSSH per-connection server daemon (147.75.109.163:55722). 
Nov 8 00:29:51.119091 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 55722 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:29:51.122445 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:51.136823 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:29:51.142867 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:29:51.146753 systemd-logind[1477]: New session 1 of user core. Nov 8 00:29:51.174740 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:29:51.182782 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:29:51.199399 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:29:51.352970 systemd[1656]: Queued start job for default target default.target. Nov 8 00:29:51.363253 systemd[1656]: Created slice app.slice - User Application Slice. Nov 8 00:29:51.363274 systemd[1656]: Reached target paths.target - Paths. Nov 8 00:29:51.363285 systemd[1656]: Reached target timers.target - Timers. Nov 8 00:29:51.364472 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:29:51.388635 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:29:51.388800 systemd[1656]: Reached target sockets.target - Sockets. Nov 8 00:29:51.388825 systemd[1656]: Reached target basic.target - Basic System. Nov 8 00:29:51.389352 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:29:51.389533 systemd[1656]: Reached target default.target - Main User Target. Nov 8 00:29:51.389595 systemd[1656]: Startup finished in 181ms. Nov 8 00:29:51.397648 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:29:52.152055 systemd[1]: Started sshd@1-46.62.239.97:22-147.75.109.163:45212.service - OpenSSH per-connection server daemon (147.75.109.163:45212). Nov 8 00:29:53.158401 sshd[1667]: Accepted publickey for core from 147.75.109.163 port 45212 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:29:53.160621 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:53.169517 systemd-logind[1477]: New session 2 of user core. Nov 8 00:29:53.178640 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:29:53.858759 sshd[1667]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:53.863109 systemd[1]: sshd@1-46.62.239.97:22-147.75.109.163:45212.service: Deactivated successfully. Nov 8 00:29:53.865365 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:29:53.866313 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:29:53.867776 systemd-logind[1477]: Removed session 2. Nov 8 00:29:54.039954 systemd[1]: Started sshd@2-46.62.239.97:22-147.75.109.163:45218.service - OpenSSH per-connection server daemon (147.75.109.163:45218). Nov 8 00:29:55.053687 sshd[1674]: Accepted publickey for core from 147.75.109.163 port 45218 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:29:55.055986 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:55.066263 systemd-logind[1477]: New session 3 of user core. Nov 8 00:29:55.083711 systemd[1]: Started session-3.scope - Session 3 of User core. 
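sshd identifies the accepted key above only as "RSA SHA256:OlzoI32…". That fingerprint format is the unpadded base64 of a SHA-256 digest over the raw public-key blob, so keys in authorized_keys can be matched against the log with a few lines of stdlib code (a sketch; the path is the conventional one for the core user seen in these logs):

    import base64, hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        """OpenSSH-style "SHA256:..." fingerprint for one authorized_keys
        or *.pub line such as "ssh-rsa AAAAB3... comment"."""
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Compare against the fingerprint sshd printed for each accepted login.
    for line in open("/home/core/.ssh/authorized_keys"):
        if line.strip() and not line.startswith("#"):
            print(ssh_sha256_fingerprint(line))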
Nov 8 00:29:55.743143 sshd[1674]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:55.746788 systemd[1]: sshd@2-46.62.239.97:22-147.75.109.163:45218.service: Deactivated successfully. Nov 8 00:29:55.748741 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:29:55.749954 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:29:55.751564 systemd-logind[1477]: Removed session 3. Nov 8 00:29:55.955909 systemd[1]: Started sshd@3-46.62.239.97:22-147.75.109.163:45228.service - OpenSSH per-connection server daemon (147.75.109.163:45228). Nov 8 00:29:57.083400 sshd[1681]: Accepted publickey for core from 147.75.109.163 port 45228 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:29:57.085137 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:57.090702 systemd-logind[1477]: New session 4 of user core. Nov 8 00:29:57.096605 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:29:57.187327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:29:57.193722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:57.317568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:57.328761 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:29:57.371016 kubelet[1692]: E1108 00:29:57.370860 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:29:57.374204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:29:57.374381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:29:57.845147 sshd[1681]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:57.848084 systemd[1]: sshd@3-46.62.239.97:22-147.75.109.163:45228.service: Deactivated successfully. Nov 8 00:29:57.850095 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:29:57.851200 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:29:57.852546 systemd-logind[1477]: Removed session 4. Nov 8 00:29:58.003894 systemd[1]: Started sshd@4-46.62.239.97:22-147.75.109.163:45232.service - OpenSSH per-connection server daemon (147.75.109.163:45232). Nov 8 00:29:58.771650 update_engine[1478]: I20251108 00:29:58.771451 1478 update_attempter.cc:509] Updating boot flags... Nov 8 00:29:58.833919 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1714) Nov 8 00:29:58.906469 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1710) Nov 8 00:29:59.013212 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 45232 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:29:59.015259 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:29:59.021060 systemd-logind[1477]: New session 5 of user core. Nov 8 00:29:59.029611 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 8 00:29:59.559975 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:29:59.560455 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:29:59.581060 sudo[1724]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:59.743298 sshd[1703]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:59.747769 systemd[1]: sshd@4-46.62.239.97:22-147.75.109.163:45232.service: Deactivated successfully. Nov 8 00:29:59.750120 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:29:59.751239 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:29:59.752704 systemd-logind[1477]: Removed session 5. Nov 8 00:29:59.924750 systemd[1]: Started sshd@5-46.62.239.97:22-147.75.109.163:45238.service - OpenSSH per-connection server daemon (147.75.109.163:45238). Nov 8 00:30:00.933389 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 45238 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:30:00.935674 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:00.942927 systemd-logind[1477]: New session 6 of user core. Nov 8 00:30:00.952610 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:30:01.474287 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:30:01.474780 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:01.480066 sudo[1733]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:01.488124 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:30:01.488598 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:01.507904 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:30:01.511786 auditctl[1736]: No rules Nov 8 00:30:01.513134 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:30:01.514070 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:30:01.521912 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:30:01.562522 augenrules[1755]: No rules Nov 8 00:30:01.562940 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:30:01.566029 sudo[1732]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:01.731309 sshd[1729]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:01.736776 systemd[1]: sshd@5-46.62.239.97:22-147.75.109.163:45238.service: Deactivated successfully. Nov 8 00:30:01.739139 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:30:01.740978 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:30:01.742529 systemd-logind[1477]: Removed session 6. Nov 8 00:30:01.911178 systemd[1]: Started sshd@6-46.62.239.97:22-147.75.109.163:55446.service - OpenSSH per-connection server daemon (147.75.109.163:55446). Nov 8 00:30:02.921226 sshd[1763]: Accepted publickey for core from 147.75.109.163 port 55446 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:30:02.923437 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:30:02.931497 systemd-logind[1477]: New session 7 of user core. 
Nov 8 00:30:02.946918 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:30:03.458144 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:30:03.458644 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:30:03.844511 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:30:03.844555 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:30:04.273013 dockerd[1783]: time="2025-11-08T00:30:04.272944683Z" level=info msg="Starting up" Nov 8 00:30:04.398319 dockerd[1783]: time="2025-11-08T00:30:04.398253869Z" level=info msg="Loading containers: start." Nov 8 00:30:04.533538 kernel: Initializing XFRM netlink socket Nov 8 00:30:04.648951 systemd-networkd[1395]: docker0: Link UP Nov 8 00:30:04.674445 dockerd[1783]: time="2025-11-08T00:30:04.674236835Z" level=info msg="Loading containers: done." Nov 8 00:30:04.696486 dockerd[1783]: time="2025-11-08T00:30:04.696392352Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:30:04.696676 dockerd[1783]: time="2025-11-08T00:30:04.696556753Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:30:04.696676 dockerd[1783]: time="2025-11-08T00:30:04.696669627Z" level=info msg="Daemon has completed initialization" Nov 8 00:30:04.746830 dockerd[1783]: time="2025-11-08T00:30:04.746744344Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:30:04.747176 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:30:06.218196 containerd[1513]: time="2025-11-08T00:30:06.218121546Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:30:06.848969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755939913.mount: Deactivated successfully. Nov 8 00:30:07.437995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:30:07.444655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:07.562065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:07.565630 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:07.606927 kubelet[1986]: E1108 00:30:07.606603 1986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:07.609159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:07.609335 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
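Once dockerd reports "API listen on /run/docker.sock" above, the daemon can be probed directly over that UNIX socket without a client library; a minimal stdlib sketch issuing the Engine API's /version call (a standard endpoint, assuming this build exposes nothing unusual):

    import socket

    # Speak plain HTTP/1.1 over the UNIX socket dockerd opened above.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")

    response = b""
    while chunk := s.recv(4096):
        response += chunk
    s.close()
    print(response.decode(errors="replace"))  # headers plus a JSON version report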
Nov 8 00:30:07.956050 containerd[1513]: time="2025-11-08T00:30:07.955961798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:07.960318 containerd[1513]: time="2025-11-08T00:30:07.960030689Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114993" Nov 8 00:30:07.964522 containerd[1513]: time="2025-11-08T00:30:07.964488535Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:07.968586 containerd[1513]: time="2025-11-08T00:30:07.968522109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:07.970036 containerd[1513]: time="2025-11-08T00:30:07.969869405Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.751704537s" Nov 8 00:30:07.970036 containerd[1513]: time="2025-11-08T00:30:07.969905333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:30:07.970885 containerd[1513]: time="2025-11-08T00:30:07.970844307Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:30:09.178744 containerd[1513]: time="2025-11-08T00:30:09.178674875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:09.180121 containerd[1513]: time="2025-11-08T00:30:09.180071121Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020866" Nov 8 00:30:09.181851 containerd[1513]: time="2025-11-08T00:30:09.181795717Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:09.185829 containerd[1513]: time="2025-11-08T00:30:09.185775192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:09.187456 containerd[1513]: time="2025-11-08T00:30:09.187295503Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.216408505s" Nov 8 00:30:09.187456 containerd[1513]: time="2025-11-08T00:30:09.187332322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:30:09.188152 containerd[1513]: 
time="2025-11-08T00:30:09.188113777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:30:10.364818 containerd[1513]: time="2025-11-08T00:30:10.364672368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:10.366805 containerd[1513]: time="2025-11-08T00:30:10.366564780Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155590" Nov 8 00:30:10.370433 containerd[1513]: time="2025-11-08T00:30:10.368238159Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:10.372202 containerd[1513]: time="2025-11-08T00:30:10.372151845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:10.373252 containerd[1513]: time="2025-11-08T00:30:10.373197599Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.185047202s" Nov 8 00:30:10.373296 containerd[1513]: time="2025-11-08T00:30:10.373258153Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:30:10.374001 containerd[1513]: time="2025-11-08T00:30:10.373961761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:30:11.415060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42643752.mount: Deactivated successfully. 
Nov 8 00:30:11.806617 containerd[1513]: time="2025-11-08T00:30:11.806539248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:11.808000 containerd[1513]: time="2025-11-08T00:30:11.807813844Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929497" Nov 8 00:30:11.810247 containerd[1513]: time="2025-11-08T00:30:11.809203886Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:11.812169 containerd[1513]: time="2025-11-08T00:30:11.811398997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:11.812169 containerd[1513]: time="2025-11-08T00:30:11.812030809Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.43802722s" Nov 8 00:30:11.812169 containerd[1513]: time="2025-11-08T00:30:11.812067759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:30:11.812728 containerd[1513]: time="2025-11-08T00:30:11.812688731Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:30:12.359371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2521444868.mount: Deactivated successfully. 
Nov 8 00:30:13.367842 containerd[1513]: time="2025-11-08T00:30:13.367769154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.369382 containerd[1513]: time="2025-11-08T00:30:13.369338513Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Nov 8 00:30:13.371360 containerd[1513]: time="2025-11-08T00:30:13.370964698Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.374585 containerd[1513]: time="2025-11-08T00:30:13.374548756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.375590 containerd[1513]: time="2025-11-08T00:30:13.375571784Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.562855702s" Nov 8 00:30:13.375626 containerd[1513]: time="2025-11-08T00:30:13.375598013Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:30:13.376358 containerd[1513]: time="2025-11-08T00:30:13.376327249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:30:13.901951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265140207.mount: Deactivated successfully. 
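The temporary mount units above ("var-lib-containerd-tmpmounts-containerd\x2dmount...mount") show systemd's unit-name escaping: "/" becomes "-", so a literal "-" inside a path component must be encoded as "\x2d". A minimal sketch of that encoding, checked against one of the paths in this log; the real rules (handled by systemd-escape) have a few more corner cases, such as a leading ".".

```python
# Sketch of systemd path-to-unit-name escaping, as seen in the mount
# units above. Alphanumerics, "_" and "." pass through; "/" separates
# components as "-"; everything else becomes \xNN.
def systemd_escape(path: str) -> str:
    path = path.strip("/")
    out = []
    for ch in path:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")
    return "".join(out)

print(systemd_escape("/var/lib/containerd/tmpmounts/containerd-mount3265140207"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount3265140207
```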
Nov 8 00:30:13.912354 containerd[1513]: time="2025-11-08T00:30:13.912276085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.913717 containerd[1513]: time="2025-11-08T00:30:13.913644204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Nov 8 00:30:13.915127 containerd[1513]: time="2025-11-08T00:30:13.915059352Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.919387 containerd[1513]: time="2025-11-08T00:30:13.919303424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.922031 containerd[1513]: time="2025-11-08T00:30:13.920598405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 544.242412ms" Nov 8 00:30:13.922031 containerd[1513]: time="2025-11-08T00:30:13.920644893Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:30:13.922031 containerd[1513]: time="2025-11-08T00:30:13.921238161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:30:14.418858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352669348.mount: Deactivated successfully. Nov 8 00:30:16.194541 containerd[1513]: time="2025-11-08T00:30:16.194473727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:16.196384 containerd[1513]: time="2025-11-08T00:30:16.196004660Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378491" Nov 8 00:30:16.199454 containerd[1513]: time="2025-11-08T00:30:16.197680797Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:16.202681 containerd[1513]: time="2025-11-08T00:30:16.202037043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:16.203841 containerd[1513]: time="2025-11-08T00:30:16.203785555Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.282514833s" Nov 8 00:30:16.203904 containerd[1513]: time="2025-11-08T00:30:16.203847252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:30:17.687705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
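The pull records above report both a "bytes read" figure and a wall-clock duration, so effective download throughput falls out directly. A quick check using the numbers copied from this journal (parsing omitted for brevity):

```python
# Effective throughput of the image pulls logged above:
# bytes read (from "stop pulling image ...") divided by the duration
# containerd reports in the "Pulled image ..." record.
pulls = {
    "kube-scheduler:v1.33.5": (20_155_590, 1.185),
    "kube-proxy:v1.33.5":     (31_929_497, 1.438),
    "coredns:v1.12.0":        (20_942_332, 1.563),
    "etcd:3.5.21-0":          (58_378_491, 2.283),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
# etcd, the largest layer set at ~58 MB, still moves at ~25.6 MB/s.
```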
Nov 8 00:30:17.699595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:17.917576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:17.930219 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:30:18.001048 kubelet[2150]: E1108 00:30:17.999502 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:30:18.005004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:30:18.005170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:30:19.631930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:19.642037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:19.677485 systemd[1]: Reloading requested from client PID 2164 ('systemctl') (unit session-7.scope)... Nov 8 00:30:19.677687 systemd[1]: Reloading... Nov 8 00:30:19.786745 zram_generator::config[2201]: No configuration found. Nov 8 00:30:19.906023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:19.987085 systemd[1]: Reloading finished in 308 ms. Nov 8 00:30:20.029091 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:30:20.029156 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:30:20.029366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:20.035620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:20.173029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:20.182814 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:20.228226 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:20.228226 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:20.228226 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
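The crash loop above ends only once /var/lib/kubelet/config.yaml exists; kubeadm normally writes that file during init/join, and the deprecation warnings note that the remaining CLI flags belong in it too. A minimal sketch of such a file, emitted as JSON (which the kubelet's YAML parser also accepts); the field subset is illustrative, not the full KubeletConfiguration schema, and the containerd socket path is an assumption based on the containerd default rather than anything in this log.

```python
# Sketch of a minimal /var/lib/kubelet/config.yaml, matching the
# cgroupDriver and staticPodPath values visible in the nodeConfig
# dump logged below.
import json

config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "cgroupDriver": "systemd",
    "staticPodPath": "/etc/kubernetes/manifests",
    # Assumed default; replaces the deprecated --container-runtime-endpoint flag.
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
}
print(json.dumps(config, indent=2))
```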
Nov 8 00:30:20.228852 kubelet[2258]: I1108 00:30:20.228248 2258 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:20.752007 kubelet[2258]: I1108 00:30:20.751949 2258 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:30:20.752007 kubelet[2258]: I1108 00:30:20.751979 2258 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:20.752204 kubelet[2258]: I1108 00:30:20.752186 2258 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:30:20.801633 kubelet[2258]: E1108 00:30:20.801546 2258 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.239.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:30:20.802011 kubelet[2258]: I1108 00:30:20.801811 2258 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:20.823013 kubelet[2258]: E1108 00:30:20.822963 2258 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:20.823183 kubelet[2258]: I1108 00:30:20.823173 2258 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:30:20.836628 kubelet[2258]: I1108 00:30:20.836598 2258 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:30:20.839763 kubelet[2258]: I1108 00:30:20.839718 2258 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:20.843568 kubelet[2258]: I1108 00:30:20.839754 2258 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-dcea41702a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:30:20.843568 kubelet[2258]: I1108 00:30:20.843564 2258 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:20.843568 kubelet[2258]: I1108 00:30:20.843575 2258 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:30:20.845036 kubelet[2258]: I1108 00:30:20.844985 2258 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:20.847941 kubelet[2258]: I1108 00:30:20.847818 2258 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:30:20.847941 kubelet[2258]: I1108 00:30:20.847864 2258 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:20.849221 kubelet[2258]: I1108 00:30:20.848680 2258 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:30:20.849221 kubelet[2258]: I1108 00:30:20.848698 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:20.863154 kubelet[2258]: E1108 00:30:20.863099 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.239.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-dcea41702a&limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:30:20.865908 kubelet[2258]: I1108 00:30:20.865842 2258 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:20.867813 kubelet[2258]: I1108 00:30:20.866387 2258 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Nov 8 00:30:20.869852 kubelet[2258]: W1108 00:30:20.868735 2258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:30:20.873530 kubelet[2258]: I1108 00:30:20.873506 2258 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:30:20.873606 kubelet[2258]: I1108 00:30:20.873565 2258 server.go:1289] "Started kubelet" Nov 8 00:30:20.877166 kubelet[2258]: E1108 00:30:20.877131 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.239.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:30:20.879791 kubelet[2258]: I1108 00:30:20.879746 2258 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:20.880797 kubelet[2258]: I1108 00:30:20.880738 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:20.882345 kubelet[2258]: I1108 00:30:20.882312 2258 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:20.882747 kubelet[2258]: I1108 00:30:20.882728 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:20.892687 kubelet[2258]: E1108 00:30:20.885596 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.239.97:6443/api/v1/namespaces/default/events\": dial tcp 46.62.239.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-dcea41702a.1875e0a33bd52355 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-dcea41702a,UID:ci-4081-3-6-n-dcea41702a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-dcea41702a,},FirstTimestamp:2025-11-08 00:30:20.873532245 +0000 UTC m=+0.686041916,LastTimestamp:2025-11-08 00:30:20.873532245 +0000 UTC m=+0.686041916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-dcea41702a,}" Nov 8 00:30:20.892687 kubelet[2258]: I1108 00:30:20.892696 2258 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:30:20.895159 kubelet[2258]: I1108 00:30:20.895128 2258 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:20.898097 kubelet[2258]: I1108 00:30:20.898047 2258 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:30:20.899099 kubelet[2258]: E1108 00:30:20.899077 2258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-dcea41702a\" not found" Nov 8 00:30:20.900852 kubelet[2258]: I1108 00:30:20.900814 2258 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:30:20.900996 kubelet[2258]: I1108 00:30:20.900985 2258 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:30:20.902359 kubelet[2258]: E1108 00:30:20.902332 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.239.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:30:20.903384 kubelet[2258]: E1108 00:30:20.903348 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.239.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-dcea41702a?timeout=10s\": dial tcp 46.62.239.97:6443: connect: connection refused" interval="200ms" Nov 8 00:30:20.904758 kubelet[2258]: I1108 00:30:20.904738 2258 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:30:20.905472 kubelet[2258]: I1108 00:30:20.905451 2258 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:20.908332 kubelet[2258]: I1108 00:30:20.908283 2258 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:30:20.917455 kubelet[2258]: I1108 00:30:20.916981 2258 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:20.918957 kubelet[2258]: I1108 00:30:20.918931 2258 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:30:20.918957 kubelet[2258]: I1108 00:30:20.918959 2258 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:30:20.919029 kubelet[2258]: I1108 00:30:20.918985 2258 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:30:20.919029 kubelet[2258]: I1108 00:30:20.918993 2258 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:30:20.919150 kubelet[2258]: E1108 00:30:20.919042 2258 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:20.932927 kubelet[2258]: E1108 00:30:20.932892 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.239.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:30:20.939269 kubelet[2258]: E1108 00:30:20.939065 2258 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:30:20.947112 kubelet[2258]: I1108 00:30:20.947084 2258 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:20.947112 kubelet[2258]: I1108 00:30:20.947103 2258 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:20.947112 kubelet[2258]: I1108 00:30:20.947124 2258 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:20.950075 kubelet[2258]: I1108 00:30:20.950046 2258 policy_none.go:49] "None policy: Start" Nov 8 00:30:20.950075 kubelet[2258]: I1108 00:30:20.950065 2258 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:30:20.950075 kubelet[2258]: I1108 00:30:20.950074 2258 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:30:20.958295 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:30:20.966516 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 8 00:30:20.969708 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:30:20.980215 kubelet[2258]: E1108 00:30:20.980186 2258 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:30:20.980787 kubelet[2258]: I1108 00:30:20.980366 2258 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:20.980787 kubelet[2258]: I1108 00:30:20.980388 2258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:20.980787 kubelet[2258]: I1108 00:30:20.980644 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:20.982993 kubelet[2258]: E1108 00:30:20.982963 2258 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:30:20.983063 kubelet[2258]: E1108 00:30:20.983010 2258 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-dcea41702a\" not found" Nov 8 00:30:21.039785 systemd[1]: Created slice kubepods-burstable-podf2746dbe9cd2a311acb13c57a2d1d5f4.slice - libcontainer container kubepods-burstable-podf2746dbe9cd2a311acb13c57a2d1d5f4.slice. Nov 8 00:30:21.053071 kubelet[2258]: E1108 00:30:21.052984 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.058391 systemd[1]: Created slice kubepods-burstable-pod4f953143db92a0d7ff2d41f5d4af04b4.slice - libcontainer container kubepods-burstable-pod4f953143db92a0d7ff2d41f5d4af04b4.slice. Nov 8 00:30:21.072004 kubelet[2258]: E1108 00:30:21.071958 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.078158 systemd[1]: Created slice kubepods-burstable-pod17287cb36cb8f1d5c433143fe1b9ec18.slice - libcontainer container kubepods-burstable-pod17287cb36cb8f1d5c433143fe1b9ec18.slice. 
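The slices created above follow the systemd cgroup driver's naming scheme: one parent slice per QoS class under kubepods.slice, then one slice per pod keyed by its UID. A sketch of the name construction; note that the kubelet replaces "-" with "_" inside pod UIDs (the UIDs in this log happen to contain none, so the substitution is a no-op here).

```python
# Sketch of kubelet's systemd slice naming for pod cgroups, matching
# the "Created slice kubepods-burstable-pod...slice" records above.
def pod_slice(qos: str, uid: str) -> str:
    # Guaranteed pods sit directly under kubepods.slice; burstable and
    # besteffort pods get a QoS-class parent.
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("burstable", "f2746dbe9cd2a311acb13c57a2d1d5f4"))
# -> kubepods-burstable-podf2746dbe9cd2a311acb13c57a2d1d5f4.slice
```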
Nov 8 00:30:21.080627 kubelet[2258]: E1108 00:30:21.080590 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.082458 kubelet[2258]: I1108 00:30:21.082387 2258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.082866 kubelet[2258]: E1108 00:30:21.082815 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.239.97:6443/api/v1/nodes\": dial tcp 46.62.239.97:6443: connect: connection refused" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105596 kubelet[2258]: I1108 00:30:21.102487 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2746dbe9cd2a311acb13c57a2d1d5f4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" (UID: \"f2746dbe9cd2a311acb13c57a2d1d5f4\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105596 kubelet[2258]: I1108 00:30:21.102522 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105596 kubelet[2258]: I1108 00:30:21.102544 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105596 kubelet[2258]: I1108 00:30:21.102564 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105596 kubelet[2258]: I1108 00:30:21.102584 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105898 kubelet[2258]: I1108 00:30:21.102605 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105898 kubelet[2258]: I1108 00:30:21.102627 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/17287cb36cb8f1d5c433143fe1b9ec18-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-dcea41702a\" (UID: \"17287cb36cb8f1d5c433143fe1b9ec18\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105898 kubelet[2258]: I1108 00:30:21.102647 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2746dbe9cd2a311acb13c57a2d1d5f4-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" (UID: \"f2746dbe9cd2a311acb13c57a2d1d5f4\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105898 kubelet[2258]: I1108 00:30:21.102674 2258 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2746dbe9cd2a311acb13c57a2d1d5f4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" (UID: \"f2746dbe9cd2a311acb13c57a2d1d5f4\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.105898 kubelet[2258]: E1108 00:30:21.104706 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.239.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-dcea41702a?timeout=10s\": dial tcp 46.62.239.97:6443: connect: connection refused" interval="400ms" Nov 8 00:30:21.117535 kubelet[2258]: E1108 00:30:21.117404 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.239.97:6443/api/v1/namespaces/default/events\": dial tcp 46.62.239.97:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-dcea41702a.1875e0a33bd52355 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-dcea41702a,UID:ci-4081-3-6-n-dcea41702a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-dcea41702a,},FirstTimestamp:2025-11-08 00:30:20.873532245 +0000 UTC m=+0.686041916,LastTimestamp:2025-11-08 00:30:20.873532245 +0000 UTC m=+0.686041916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-dcea41702a,}" Nov 8 00:30:21.286534 kubelet[2258]: I1108 00:30:21.286476 2258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.287127 kubelet[2258]: E1108 00:30:21.286986 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.239.97:6443/api/v1/nodes\": dial tcp 46.62.239.97:6443: connect: connection refused" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.355131 containerd[1513]: time="2025-11-08T00:30:21.354970796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-dcea41702a,Uid:f2746dbe9cd2a311acb13c57a2d1d5f4,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:21.379439 containerd[1513]: time="2025-11-08T00:30:21.379342345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-dcea41702a,Uid:4f953143db92a0d7ff2d41f5d4af04b4,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:21.382238 containerd[1513]: time="2025-11-08T00:30:21.381796071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-dcea41702a,Uid:17287cb36cb8f1d5c433143fe1b9ec18,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:21.506157 
kubelet[2258]: E1108 00:30:21.506077 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.239.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-dcea41702a?timeout=10s\": dial tcp 46.62.239.97:6443: connect: connection refused" interval="800ms" Nov 8 00:30:21.691816 kubelet[2258]: I1108 00:30:21.691759 2258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.693387 kubelet[2258]: E1108 00:30:21.693193 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.239.97:6443/api/v1/nodes\": dial tcp 46.62.239.97:6443: connect: connection refused" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:21.861892 kubelet[2258]: E1108 00:30:21.861844 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.239.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:30:21.873616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238379518.mount: Deactivated successfully. Nov 8 00:30:21.887119 containerd[1513]: time="2025-11-08T00:30:21.886935279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:21.889065 containerd[1513]: time="2025-11-08T00:30:21.888974385Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:21.890913 containerd[1513]: time="2025-11-08T00:30:21.890777117Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Nov 8 00:30:21.892051 containerd[1513]: time="2025-11-08T00:30:21.891987544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:30:21.893249 containerd[1513]: time="2025-11-08T00:30:21.893175950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:21.895047 containerd[1513]: time="2025-11-08T00:30:21.894962581Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:21.897769 containerd[1513]: time="2025-11-08T00:30:21.897676016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:30:21.899222 containerd[1513]: time="2025-11-08T00:30:21.899129470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:21.901339 containerd[1513]: time="2025-11-08T00:30:21.901300604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 519.382653ms" Nov 8 00:30:21.903397 containerd[1513]: time="2025-11-08T00:30:21.903351943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.892146ms" Nov 8 00:30:21.904076 containerd[1513]: time="2025-11-08T00:30:21.904033324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.930801ms" Nov 8 00:30:22.073451 kubelet[2258]: E1108 00:30:22.073260 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.239.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:30:22.082124 containerd[1513]: time="2025-11-08T00:30:22.081877632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:22.082124 containerd[1513]: time="2025-11-08T00:30:22.081925452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:22.082124 containerd[1513]: time="2025-11-08T00:30:22.081939298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:22.082124 containerd[1513]: time="2025-11-08T00:30:22.082011624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:22.087439 containerd[1513]: time="2025-11-08T00:30:22.086806392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:22.087439 containerd[1513]: time="2025-11-08T00:30:22.086864091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:22.087439 containerd[1513]: time="2025-11-08T00:30:22.086877446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:22.087439 containerd[1513]: time="2025-11-08T00:30:22.086939132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:22.088292 containerd[1513]: time="2025-11-08T00:30:22.088125353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:22.088292 containerd[1513]: time="2025-11-08T00:30:22.088171339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:22.088292 containerd[1513]: time="2025-11-08T00:30:22.088185686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:22.088292 containerd[1513]: time="2025-11-08T00:30:22.088236061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:22.113571 systemd[1]: Started cri-containerd-bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6.scope - libcontainer container bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6. Nov 8 00:30:22.125821 systemd[1]: Started cri-containerd-cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4.scope - libcontainer container cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4. Nov 8 00:30:22.128857 systemd[1]: Started cri-containerd-e7362f971c84dd6311739dab46923c196f572e2a8737a9083773fba0676cafd5.scope - libcontainer container e7362f971c84dd6311739dab46923c196f572e2a8737a9083773fba0676cafd5. Nov 8 00:30:22.180638 containerd[1513]: time="2025-11-08T00:30:22.179508773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-dcea41702a,Uid:4f953143db92a0d7ff2d41f5d4af04b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6\"" Nov 8 00:30:22.196986 containerd[1513]: time="2025-11-08T00:30:22.196883812Z" level=info msg="CreateContainer within sandbox \"bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:30:22.205325 containerd[1513]: time="2025-11-08T00:30:22.205179976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-dcea41702a,Uid:f2746dbe9cd2a311acb13c57a2d1d5f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7362f971c84dd6311739dab46923c196f572e2a8737a9083773fba0676cafd5\"" Nov 8 00:30:22.213246 containerd[1513]: time="2025-11-08T00:30:22.213085336Z" level=info msg="CreateContainer within sandbox \"e7362f971c84dd6311739dab46923c196f572e2a8737a9083773fba0676cafd5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:30:22.224618 containerd[1513]: time="2025-11-08T00:30:22.224203547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-dcea41702a,Uid:17287cb36cb8f1d5c433143fe1b9ec18,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4\"" Nov 8 00:30:22.235548 containerd[1513]: time="2025-11-08T00:30:22.235372345Z" level=info msg="CreateContainer within sandbox \"cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:30:22.243057 containerd[1513]: time="2025-11-08T00:30:22.242996014Z" level=info msg="CreateContainer within sandbox \"bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6\"" Nov 8 00:30:22.243901 containerd[1513]: time="2025-11-08T00:30:22.243820534Z" level=info msg="StartContainer for \"da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6\"" Nov 8 00:30:22.251952 containerd[1513]: time="2025-11-08T00:30:22.251889821Z" level=info 
msg="CreateContainer within sandbox \"e7362f971c84dd6311739dab46923c196f572e2a8737a9083773fba0676cafd5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"236350586cb6fbe530ba6c747364eef3254db59b0c58df8f74fb299cfdbbd50c\"" Nov 8 00:30:22.253286 containerd[1513]: time="2025-11-08T00:30:22.253101791Z" level=info msg="StartContainer for \"236350586cb6fbe530ba6c747364eef3254db59b0c58df8f74fb299cfdbbd50c\"" Nov 8 00:30:22.265217 kubelet[2258]: E1108 00:30:22.265169 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.239.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:30:22.273034 containerd[1513]: time="2025-11-08T00:30:22.272817513Z" level=info msg="CreateContainer within sandbox \"cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f\"" Nov 8 00:30:22.276447 containerd[1513]: time="2025-11-08T00:30:22.275027801Z" level=info msg="StartContainer for \"3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f\"" Nov 8 00:30:22.277925 systemd[1]: Started cri-containerd-da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6.scope - libcontainer container da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6. Nov 8 00:30:22.287568 systemd[1]: Started cri-containerd-236350586cb6fbe530ba6c747364eef3254db59b0c58df8f74fb299cfdbbd50c.scope - libcontainer container 236350586cb6fbe530ba6c747364eef3254db59b0c58df8f74fb299cfdbbd50c. Nov 8 00:30:22.307087 kubelet[2258]: E1108 00:30:22.306986 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.239.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-dcea41702a?timeout=10s\": dial tcp 46.62.239.97:6443: connect: connection refused" interval="1.6s" Nov 8 00:30:22.324556 systemd[1]: Started cri-containerd-3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f.scope - libcontainer container 3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f. 
Nov 8 00:30:22.360451 containerd[1513]: time="2025-11-08T00:30:22.359891768Z" level=info msg="StartContainer for \"da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6\" returns successfully" Nov 8 00:30:22.378902 containerd[1513]: time="2025-11-08T00:30:22.378814559Z" level=info msg="StartContainer for \"236350586cb6fbe530ba6c747364eef3254db59b0c58df8f74fb299cfdbbd50c\" returns successfully" Nov 8 00:30:22.413213 containerd[1513]: time="2025-11-08T00:30:22.413135262Z" level=info msg="StartContainer for \"3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f\" returns successfully" Nov 8 00:30:22.429904 kubelet[2258]: E1108 00:30:22.429813 2258 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.239.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-dcea41702a&limit=500&resourceVersion=0\": dial tcp 46.62.239.97:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:30:22.498112 kubelet[2258]: I1108 00:30:22.497623 2258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:22.498112 kubelet[2258]: E1108 00:30:22.498012 2258 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.239.97:6443/api/v1/nodes\": dial tcp 46.62.239.97:6443: connect: connection refused" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:22.954629 kubelet[2258]: E1108 00:30:22.954587 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:22.957723 kubelet[2258]: E1108 00:30:22.957698 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:22.959633 kubelet[2258]: E1108 00:30:22.959607 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:23.965371 kubelet[2258]: E1108 00:30:23.965306 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:23.972108 kubelet[2258]: E1108 00:30:23.972063 2258 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:24.102360 kubelet[2258]: I1108 00:30:24.102303 2258 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.219326 kubelet[2258]: E1108 00:30:25.218499 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-dcea41702a\" not found" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.307532 kubelet[2258]: I1108 00:30:25.307447 2258 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.307771 kubelet[2258]: E1108 00:30:25.307519 2258 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-dcea41702a\": node \"ci-4081-3-6-n-dcea41702a\" not found" Nov 8 00:30:25.329860 kubelet[2258]: E1108 00:30:25.329772 2258 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"ci-4081-3-6-n-dcea41702a\" not found" Nov 8 00:30:25.430978 kubelet[2258]: E1108 00:30:25.430877 2258 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-dcea41702a\" not found" Nov 8 00:30:25.502061 kubelet[2258]: I1108 00:30:25.501760 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.515068 kubelet[2258]: E1108 00:30:25.514713 2258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.515068 kubelet[2258]: I1108 00:30:25.514775 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.518778 kubelet[2258]: E1108 00:30:25.518755 2258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.518956 kubelet[2258]: I1108 00:30:25.518886 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.521332 kubelet[2258]: E1108 00:30:25.521277 2258 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-dcea41702a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:25.874401 kubelet[2258]: I1108 00:30:25.874191 2258 apiserver.go:52] "Watching apiserver" Nov 8 00:30:25.901958 kubelet[2258]: I1108 00:30:25.901730 2258 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:30:27.511606 kubelet[2258]: I1108 00:30:27.511549 2258 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:27.685044 systemd[1]: Reloading requested from client PID 2544 ('systemctl') (unit session-7.scope)... Nov 8 00:30:27.685072 systemd[1]: Reloading... Nov 8 00:30:27.821494 zram_generator::config[2584]: No configuration found. Nov 8 00:30:27.946041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:28.033792 systemd[1]: Reloading finished in 347 ms. Nov 8 00:30:28.068749 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:28.090972 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:30:28.091704 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:28.092024 systemd[1]: kubelet.service: Consumed 1.206s CPU time, 128.6M memory peak, 0B memory swap peak. Nov 8 00:30:28.097674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:28.245178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
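The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above resolve themselves: the freshly started apiserver's bootstrap logic installs the built-in priority classes shortly after it begins serving. Roughly the object it creates, using the well-known value 2000001000 for system-node-critical:

```python
# Sketch of the built-in PriorityClass whose absence caused the
# mirror-pod rejections above; the apiserver creates it automatically.
import json

pc = {
    "apiVersion": "scheduling.k8s.io/v1",
    "kind": "PriorityClass",
    "metadata": {"name": "system-node-critical"},
    "value": 2000001000,
    "description": "Used for system critical pods that must not be moved from their current node.",
}
print(json.dumps(pc, indent=2))
```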
Nov 8 00:30:28.255715 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:28.308916 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:28.308916 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:28.308916 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:28.309392 kubelet[2635]: I1108 00:30:28.308972 2635 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:28.321389 kubelet[2635]: I1108 00:30:28.321331 2635 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:30:28.321389 kubelet[2635]: I1108 00:30:28.321367 2635 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:28.329325 kubelet[2635]: I1108 00:30:28.329167 2635 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:30:28.333306 kubelet[2635]: I1108 00:30:28.333270 2635 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:30:28.344899 kubelet[2635]: I1108 00:30:28.344501 2635 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:28.350395 kubelet[2635]: E1108 00:30:28.350301 2635 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:28.350395 kubelet[2635]: I1108 00:30:28.350391 2635 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:30:28.354109 kubelet[2635]: I1108 00:30:28.354077 2635 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:30:28.354485 kubelet[2635]: I1108 00:30:28.354427 2635 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:28.354722 kubelet[2635]: I1108 00:30:28.354477 2635 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-dcea41702a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:30:28.354827 kubelet[2635]: I1108 00:30:28.354727 2635 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:28.354827 kubelet[2635]: I1108 00:30:28.354743 2635 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:30:28.354827 kubelet[2635]: I1108 00:30:28.354800 2635 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:28.355056 kubelet[2635]: I1108 00:30:28.355008 2635 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:30:28.355056 kubelet[2635]: I1108 00:30:28.355034 2635 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:28.356441 kubelet[2635]: I1108 00:30:28.356199 2635 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:30:28.357505 kubelet[2635]: I1108 00:30:28.357481 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:28.362451 kubelet[2635]: I1108 00:30:28.362317 2635 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:28.364437 kubelet[2635]: I1108 00:30:28.363580 2635 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:30:28.372481 kubelet[2635]: I1108 00:30:28.371895 2635 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:30:28.372481 kubelet[2635]: I1108 00:30:28.371996 2635 server.go:1289] "Started kubelet" Nov 8 00:30:28.381120 kubelet[2635]: I1108 00:30:28.381078 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:28.386621 kubelet[2635]: I1108 00:30:28.386034 
2635 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:28.387131 kubelet[2635]: I1108 00:30:28.386830 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:28.394239 kubelet[2635]: I1108 00:30:28.394155 2635 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:28.396956 kubelet[2635]: I1108 00:30:28.396821 2635 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:30:28.397893 kubelet[2635]: E1108 00:30:28.397762 2635 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-dcea41702a\" not found" Nov 8 00:30:28.404011 kubelet[2635]: I1108 00:30:28.403954 2635 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:30:28.404167 kubelet[2635]: I1108 00:30:28.404138 2635 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:30:28.408514 kubelet[2635]: I1108 00:30:28.407735 2635 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:28.414763 kubelet[2635]: I1108 00:30:28.414514 2635 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:30:28.414954 kubelet[2635]: I1108 00:30:28.414926 2635 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:30:28.415625 kubelet[2635]: I1108 00:30:28.415086 2635 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:28.421002 kubelet[2635]: I1108 00:30:28.420827 2635 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:30:28.428070 kubelet[2635]: I1108 00:30:28.428010 2635 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:28.429538 kubelet[2635]: I1108 00:30:28.429521 2635 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:30:28.429654 kubelet[2635]: I1108 00:30:28.429642 2635 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:30:28.429744 kubelet[2635]: I1108 00:30:28.429731 2635 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
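With client rotation on, the restarted kubelet above loads its identity from /var/lib/kubelet/pki/kubelet-client-current.pem, a symlink that rotation swaps in place. A sketch for inspecting the active certificate's validity window; it assumes the file exists on the node and that the openssl binary is installed, neither of which this log confirms.

```python
# Inspect the rotated kubelet client certificate referenced above.
import subprocess

PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"
out = subprocess.run(
    ["openssl", "x509", "-in", PEM, "-noout", "-subject", "-dates"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)  # subject plus notBefore/notAfter of the current cert
```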
Nov 8 00:30:28.429805 kubelet[2635]: I1108 00:30:28.429795 2635 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:30:28.430268 kubelet[2635]: E1108 00:30:28.429940 2635 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488534 2635 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488558 2635 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488588 2635 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488833 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488866 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488890 2635 policy_none.go:49] "None policy: Start" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488902 2635 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.488915 2635 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:30:28.489457 kubelet[2635]: I1108 00:30:28.489036 2635 state_mem.go:75] "Updated machine memory state" Nov 8 00:30:28.494371 kubelet[2635]: E1108 00:30:28.494340 2635 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:30:28.495831 kubelet[2635]: I1108 00:30:28.495762 2635 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:28.495955 kubelet[2635]: I1108 00:30:28.495915 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:28.496520 kubelet[2635]: I1108 00:30:28.496494 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:28.498828 kubelet[2635]: E1108 00:30:28.498803 2635 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:30:28.531737 kubelet[2635]: I1108 00:30:28.531676 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.532064 kubelet[2635]: I1108 00:30:28.532037 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.532481 kubelet[2635]: I1108 00:30:28.532468 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.542361 kubelet[2635]: E1108 00:30:28.542275 2635 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.605082 kubelet[2635]: I1108 00:30:28.605011 2635 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.609184 kubelet[2635]: I1108 00:30:28.608806 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f2746dbe9cd2a311acb13c57a2d1d5f4-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" (UID: \"f2746dbe9cd2a311acb13c57a2d1d5f4\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.609184 kubelet[2635]: I1108 00:30:28.608882 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f2746dbe9cd2a311acb13c57a2d1d5f4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" (UID: \"f2746dbe9cd2a311acb13c57a2d1d5f4\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.609184 kubelet[2635]: I1108 00:30:28.608910 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f2746dbe9cd2a311acb13c57a2d1d5f4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" (UID: \"f2746dbe9cd2a311acb13c57a2d1d5f4\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.609184 kubelet[2635]: I1108 00:30:28.608941 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.609184 kubelet[2635]: I1108 00:30:28.608968 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.610376 kubelet[2635]: I1108 00:30:28.608994 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.610376 kubelet[2635]: I1108 00:30:28.609022 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.610376 kubelet[2635]: I1108 00:30:28.609043 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f953143db92a0d7ff2d41f5d4af04b4-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-dcea41702a\" (UID: \"4f953143db92a0d7ff2d41f5d4af04b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.610376 kubelet[2635]: I1108 00:30:28.609064 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/17287cb36cb8f1d5c433143fe1b9ec18-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-dcea41702a\" (UID: \"17287cb36cb8f1d5c433143fe1b9ec18\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.615749 kubelet[2635]: I1108 00:30:28.615713 2635 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:28.615923 kubelet[2635]: I1108 00:30:28.615824 2635 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-dcea41702a" Nov 8 00:30:29.362188 kubelet[2635]: I1108 00:30:29.362036 2635 apiserver.go:52] "Watching apiserver" Nov 8 00:30:29.404601 kubelet[2635]: I1108 00:30:29.404547 2635 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:30:29.467801 kubelet[2635]: I1108 00:30:29.467756 2635 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:29.485179 kubelet[2635]: E1108 00:30:29.485140 2635 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-dcea41702a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" Nov 8 00:30:29.570791 kubelet[2635]: I1108 00:30:29.569854 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-dcea41702a" podStartSLOduration=1.5698142430000002 podStartE2EDuration="1.569814243s" podCreationTimestamp="2025-11-08 00:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:29.549343112 +0000 UTC m=+1.287838985" watchObservedRunningTime="2025-11-08 00:30:29.569814243 +0000 UTC m=+1.308310117" Nov 8 00:30:29.601741 kubelet[2635]: I1108 00:30:29.601558 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-dcea41702a" podStartSLOduration=2.601535492 podStartE2EDuration="2.601535492s" podCreationTimestamp="2025-11-08 00:30:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:29.571337856 +0000 UTC m=+1.309833728" watchObservedRunningTime="2025-11-08 00:30:29.601535492 +0000 UTC m=+1.340031355" Nov 8 00:30:29.601741 kubelet[2635]: I1108 00:30:29.601671 2635 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-dcea41702a" podStartSLOduration=1.601666818 podStartE2EDuration="1.601666818s" podCreationTimestamp="2025-11-08 00:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:29.601246668 +0000 UTC m=+1.339742541" watchObservedRunningTime="2025-11-08 00:30:29.601666818 +0000 UTC m=+1.340162691" Nov 8 00:30:32.610947 kubelet[2635]: I1108 00:30:32.610885 2635 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:30:32.611633 containerd[1513]: time="2025-11-08T00:30:32.611387804Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:30:32.612005 kubelet[2635]: I1108 00:30:32.611739 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:30:33.645078 kubelet[2635]: I1108 00:30:33.644992 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cff080c-c4c2-4785-b624-b47701c97a57-lib-modules\") pod \"kube-proxy-5lhzd\" (UID: \"3cff080c-c4c2-4785-b624-b47701c97a57\") " pod="kube-system/kube-proxy-5lhzd" Nov 8 00:30:33.645078 kubelet[2635]: I1108 00:30:33.645060 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdqbz\" (UniqueName: \"kubernetes.io/projected/3cff080c-c4c2-4785-b624-b47701c97a57-kube-api-access-cdqbz\") pod \"kube-proxy-5lhzd\" (UID: \"3cff080c-c4c2-4785-b624-b47701c97a57\") " pod="kube-system/kube-proxy-5lhzd" Nov 8 00:30:33.645709 kubelet[2635]: I1108 00:30:33.645094 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cff080c-c4c2-4785-b624-b47701c97a57-kube-proxy\") pod \"kube-proxy-5lhzd\" (UID: \"3cff080c-c4c2-4785-b624-b47701c97a57\") " pod="kube-system/kube-proxy-5lhzd" Nov 8 00:30:33.645709 kubelet[2635]: I1108 00:30:33.645130 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cff080c-c4c2-4785-b624-b47701c97a57-xtables-lock\") pod \"kube-proxy-5lhzd\" (UID: \"3cff080c-c4c2-4785-b624-b47701c97a57\") " pod="kube-system/kube-proxy-5lhzd" Nov 8 00:30:33.651044 systemd[1]: Created slice kubepods-besteffort-pod3cff080c_c4c2_4785_b624_b47701c97a57.slice - libcontainer container kubepods-besteffort-pod3cff080c_c4c2_4785_b624_b47701c97a57.slice. Nov 8 00:30:33.774202 systemd[1]: Created slice kubepods-besteffort-pod4cf072bf_9aa5_416d_9999_e4368160d55a.slice - libcontainer container kubepods-besteffort-pod4cf072bf_9aa5_416d_9999_e4368160d55a.slice. 
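
The kuberuntime_manager and kubelet_network entries above show the node's pod CIDR being pushed to the runtime via the CRI runtime config ("" -> 192.168.0.0/24). A quick standard-library check of what that allocation provides, as a standalone sketch:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// The pod CIDR handed to containerd in the entries above.
    	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
    	if err != nil {
    		panic(err)
    	}
    	ones, bits := ipnet.Mask.Size()
    	// A /24 leaves 8 host bits: 256 addresses, typically at most 254
    	// of them usable for pods after network and broadcast addresses.
    	fmt.Printf("network=%s hostBits=%d podIPs<=%d\n", ipnet, bits-ones, 1<<(bits-ones)-2)
    }
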
Nov 8 00:30:33.846931 kubelet[2635]: I1108 00:30:33.846788 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj697\" (UniqueName: \"kubernetes.io/projected/4cf072bf-9aa5-416d-9999-e4368160d55a-kube-api-access-xj697\") pod \"tigera-operator-7dcd859c48-8sktt\" (UID: \"4cf072bf-9aa5-416d-9999-e4368160d55a\") " pod="tigera-operator/tigera-operator-7dcd859c48-8sktt" Nov 8 00:30:33.846931 kubelet[2635]: I1108 00:30:33.846867 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cf072bf-9aa5-416d-9999-e4368160d55a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8sktt\" (UID: \"4cf072bf-9aa5-416d-9999-e4368160d55a\") " pod="tigera-operator/tigera-operator-7dcd859c48-8sktt" Nov 8 00:30:33.967456 containerd[1513]: time="2025-11-08T00:30:33.966732201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lhzd,Uid:3cff080c-c4c2-4785-b624-b47701c97a57,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:34.020131 containerd[1513]: time="2025-11-08T00:30:34.019732762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:34.020131 containerd[1513]: time="2025-11-08T00:30:34.020083300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:34.020555 containerd[1513]: time="2025-11-08T00:30:34.020108417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:34.020555 containerd[1513]: time="2025-11-08T00:30:34.020305077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:34.056780 systemd[1]: Started cri-containerd-0971cf0c2553ee772d2057d3b2caaed67a8822fcf9f788666a7eced880fc46ad.scope - libcontainer container 0971cf0c2553ee772d2057d3b2caaed67a8822fcf9f788666a7eced880fc46ad. Nov 8 00:30:34.078237 containerd[1513]: time="2025-11-08T00:30:34.077913784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8sktt,Uid:4cf072bf-9aa5-416d-9999-e4368160d55a,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:30:34.109653 containerd[1513]: time="2025-11-08T00:30:34.109345828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lhzd,Uid:3cff080c-c4c2-4785-b624-b47701c97a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"0971cf0c2553ee772d2057d3b2caaed67a8822fcf9f788666a7eced880fc46ad\"" Nov 8 00:30:34.121857 containerd[1513]: time="2025-11-08T00:30:34.119836627Z" level=info msg="CreateContainer within sandbox \"0971cf0c2553ee772d2057d3b2caaed67a8822fcf9f788666a7eced880fc46ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:30:34.137225 containerd[1513]: time="2025-11-08T00:30:34.136954993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:34.137225 containerd[1513]: time="2025-11-08T00:30:34.137035494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:34.137225 containerd[1513]: time="2025-11-08T00:30:34.137055060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:34.137377 containerd[1513]: time="2025-11-08T00:30:34.137235890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:34.152991 containerd[1513]: time="2025-11-08T00:30:34.152794195Z" level=info msg="CreateContainer within sandbox \"0971cf0c2553ee772d2057d3b2caaed67a8822fcf9f788666a7eced880fc46ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"47c61f5bbb3cc38d4bbbfff9337b4381ef5c15b48126688c3de44e703cba927f\"" Nov 8 00:30:34.154921 containerd[1513]: time="2025-11-08T00:30:34.154893338Z" level=info msg="StartContainer for \"47c61f5bbb3cc38d4bbbfff9337b4381ef5c15b48126688c3de44e703cba927f\"" Nov 8 00:30:34.163605 systemd[1]: Started cri-containerd-7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250.scope - libcontainer container 7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250. Nov 8 00:30:34.200630 systemd[1]: Started cri-containerd-47c61f5bbb3cc38d4bbbfff9337b4381ef5c15b48126688c3de44e703cba927f.scope - libcontainer container 47c61f5bbb3cc38d4bbbfff9337b4381ef5c15b48126688c3de44e703cba927f. Nov 8 00:30:34.234973 containerd[1513]: time="2025-11-08T00:30:34.234671175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8sktt,Uid:4cf072bf-9aa5-416d-9999-e4368160d55a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250\"" Nov 8 00:30:34.237343 containerd[1513]: time="2025-11-08T00:30:34.237310952Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:30:34.266121 containerd[1513]: time="2025-11-08T00:30:34.266042053Z" level=info msg="StartContainer for \"47c61f5bbb3cc38d4bbbfff9337b4381ef5c15b48126688c3de44e703cba927f\" returns successfully" Nov 8 00:30:36.338726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4233695515.mount: Deactivated successfully. 
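
The RunPodSandbox, CreateContainer, and StartContainer entries above are emitted by containerd's CRI plugin (the containerd[1513] process). To reproduce or inspect a pull against the same daemon out of band, a sketch using the containerd Go client is shown here; it assumes the stock socket path and the "k8s.io" namespace that the CRI plugin uses for Kubernetes-managed images:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the same containerd instance the kubelet talks to.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled:", img.Name())
    }
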
Nov 8 00:30:36.871889 containerd[1513]: time="2025-11-08T00:30:36.871753917Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:36.873983 containerd[1513]: time="2025-11-08T00:30:36.873901358Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:30:36.875047 containerd[1513]: time="2025-11-08T00:30:36.875025880Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:36.879445 containerd[1513]: time="2025-11-08T00:30:36.878894713Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:36.880754 containerd[1513]: time="2025-11-08T00:30:36.880731522Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.643387539s" Nov 8 00:30:36.880831 containerd[1513]: time="2025-11-08T00:30:36.880819066Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:30:36.888598 containerd[1513]: time="2025-11-08T00:30:36.888547757Z" level=info msg="CreateContainer within sandbox \"7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:30:36.908829 containerd[1513]: time="2025-11-08T00:30:36.908757992Z" level=info msg="CreateContainer within sandbox \"7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321\"" Nov 8 00:30:36.909802 containerd[1513]: time="2025-11-08T00:30:36.909740115Z" level=info msg="StartContainer for \"4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321\"" Nov 8 00:30:36.942550 systemd[1]: Started cri-containerd-4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321.scope - libcontainer container 4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321. 
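
The Pulled entry above reports the operator image fetched "in 2.643387539s". That figure is containerd's internal measurement; recomputing it from the two log timestamps (PullImage request vs. Pulled event) lands within microseconds of it, as this sketch shows:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// containerd log timestamps: the PullImage request and the Pulled event above.
    	start, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:30:34.237310952Z")
    	end, _ := time.Parse(time.RFC3339Nano, "2025-11-08T00:30:36.880731522Z")
    	fmt.Println(end.Sub(start)) // ~2.6434s, in line with the reported 2.643387539s
    }
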
Nov 8 00:30:36.971643 containerd[1513]: time="2025-11-08T00:30:36.971441589Z" level=info msg="StartContainer for \"4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321\" returns successfully" Nov 8 00:30:37.515751 kubelet[2635]: I1108 00:30:37.514216 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5lhzd" podStartSLOduration=4.514181818 podStartE2EDuration="4.514181818s" podCreationTimestamp="2025-11-08 00:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:34.52840427 +0000 UTC m=+6.266900163" watchObservedRunningTime="2025-11-08 00:30:37.514181818 +0000 UTC m=+9.252677731" Nov 8 00:30:37.515751 kubelet[2635]: I1108 00:30:37.514480 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8sktt" podStartSLOduration=1.867796802 podStartE2EDuration="4.514462745s" podCreationTimestamp="2025-11-08 00:30:33 +0000 UTC" firstStartedPulling="2025-11-08 00:30:34.236827594 +0000 UTC m=+5.975323468" lastFinishedPulling="2025-11-08 00:30:36.883493528 +0000 UTC m=+8.621989411" observedRunningTime="2025-11-08 00:30:37.513128549 +0000 UTC m=+9.251624453" watchObservedRunningTime="2025-11-08 00:30:37.514462745 +0000 UTC m=+9.252958688" Nov 8 00:30:43.420359 sudo[1766]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:43.586477 sshd[1763]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:43.590324 systemd[1]: sshd@6-46.62.239.97:22-147.75.109.163:55446.service: Deactivated successfully. Nov 8 00:30:43.594025 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:30:43.594376 systemd[1]: session-7.scope: Consumed 5.687s CPU time, 147.3M memory peak, 0B memory swap peak. Nov 8 00:30:43.596355 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:30:43.599108 systemd-logind[1477]: Removed session 7. Nov 8 00:30:47.710639 systemd[1]: Created slice kubepods-besteffort-pod5482fa03_87fd_4110_b776_46bd47b2e6b8.slice - libcontainer container kubepods-besteffort-pod5482fa03_87fd_4110_b776_46bd47b2e6b8.slice. 
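
The pod_startup_latency_tracker entries report two durations per pod: podStartE2EDuration (observed running time minus creation time) and podStartSLOduration, which additionally subtracts the image-pull window. That is why kube-proxy and the static pods, which pulled nothing, show identical values, while tigera-operator does not. A sketch reproducing the tigera-operator numbers from the logged timestamps (the parse layout is Go's time.Time.String() form; the trailing "m=+..." monotonic readings are dropped):

    package main

    import (
    	"fmt"
    	"time"
    )

    func parse(s string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	// Values from the tigera-operator pod_startup_latency_tracker entry above.
    	created := parse("2025-11-08 00:30:33 +0000 UTC")
    	running := parse("2025-11-08 00:30:37.514462745 +0000 UTC")
    	pullStart := parse("2025-11-08 00:30:34.236827594 +0000 UTC")
    	pullEnd := parse("2025-11-08 00:30:36.883493528 +0000 UTC")

    	e2e := running.Sub(created)         // 4.514462745s, the logged podStartE2EDuration
    	slo := e2e - pullEnd.Sub(pullStart) // ~1.8678s: the SLO figure excludes pull time
    	fmt.Println(e2e, slo)
    }
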
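
The long run of driver-call.go errors beginning just below is the kubelet's FlexVolume prober execing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init argument, finding no executable, and then failing to parse the empty stdout as JSON ("unexpected end of JSON input"). A FlexVolume driver is simply a binary that answers each subcommand with a JSON status on stdout; the sketch below is a minimal stand-in for illustration only. In a Calico deployment the calico-node flexvol-driver init container normally installs the real uds binary through the flexvol-driver-host mount that appears in the calico-node volume list below:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Minimal FlexVolume driver stub. The kubelet execs the binary at
    // .../volume/exec/nodeagent~uds/uds with a subcommand ("init" here)
    // and parses stdout as JSON; empty stdout produces the errors above.
    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(map[string]interface{}{
    			"status":       "Success",
    			"capabilities": map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Anything this stub does not implement is reported as unsupported,
    	// which the kubelet handles by falling back to its default behavior.
    	fmt.Println(`{"status":"Not supported"}`)
    }
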
Nov 8 00:30:47.741844 kubelet[2635]: I1108 00:30:47.741766 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5482fa03-87fd-4110-b776-46bd47b2e6b8-tigera-ca-bundle\") pod \"calico-typha-86d799bcb5-pbqth\" (UID: \"5482fa03-87fd-4110-b776-46bd47b2e6b8\") " pod="calico-system/calico-typha-86d799bcb5-pbqth" Nov 8 00:30:47.741844 kubelet[2635]: I1108 00:30:47.741813 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z672d\" (UniqueName: \"kubernetes.io/projected/5482fa03-87fd-4110-b776-46bd47b2e6b8-kube-api-access-z672d\") pod \"calico-typha-86d799bcb5-pbqth\" (UID: \"5482fa03-87fd-4110-b776-46bd47b2e6b8\") " pod="calico-system/calico-typha-86d799bcb5-pbqth" Nov 8 00:30:47.741844 kubelet[2635]: I1108 00:30:47.741841 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5482fa03-87fd-4110-b776-46bd47b2e6b8-typha-certs\") pod \"calico-typha-86d799bcb5-pbqth\" (UID: \"5482fa03-87fd-4110-b776-46bd47b2e6b8\") " pod="calico-system/calico-typha-86d799bcb5-pbqth" Nov 8 00:30:47.831089 systemd[1]: Created slice kubepods-besteffort-podc9290bfc_abc1_4b17_a06e_63975f90e0fe.slice - libcontainer container kubepods-besteffort-podc9290bfc_abc1_4b17_a06e_63975f90e0fe.slice. Nov 8 00:30:47.843637 kubelet[2635]: I1108 00:30:47.842555 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c9290bfc-abc1-4b17-a06e-63975f90e0fe-node-certs\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.843637 kubelet[2635]: I1108 00:30:47.842663 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-var-lib-calico\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.843637 kubelet[2635]: I1108 00:30:47.842731 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-flexvol-driver-host\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.843637 kubelet[2635]: I1108 00:30:47.842775 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-cni-bin-dir\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.843637 kubelet[2635]: I1108 00:30:47.842798 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-lib-modules\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845495 kubelet[2635]: I1108 00:30:47.842820 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-cni-net-dir\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845495 kubelet[2635]: I1108 00:30:47.842904 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-policysync\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845495 kubelet[2635]: I1108 00:30:47.842926 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9290bfc-abc1-4b17-a06e-63975f90e0fe-tigera-ca-bundle\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845495 kubelet[2635]: I1108 00:30:47.842947 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-xtables-lock\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845495 kubelet[2635]: I1108 00:30:47.842971 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-cni-log-dir\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845614 kubelet[2635]: I1108 00:30:47.842992 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c9290bfc-abc1-4b17-a06e-63975f90e0fe-var-run-calico\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.845614 kubelet[2635]: I1108 00:30:47.843014 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khrgw\" (UniqueName: \"kubernetes.io/projected/c9290bfc-abc1-4b17-a06e-63975f90e0fe-kube-api-access-khrgw\") pod \"calico-node-bnsqs\" (UID: \"c9290bfc-abc1-4b17-a06e-63975f90e0fe\") " pod="calico-system/calico-node-bnsqs" Nov 8 00:30:47.954496 kubelet[2635]: E1108 00:30:47.954446 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.954496 kubelet[2635]: W1108 00:30:47.954491 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.954926 kubelet[2635]: E1108 00:30:47.954537 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:47.954926 kubelet[2635]: E1108 00:30:47.954869 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.954926 kubelet[2635]: W1108 00:30:47.954878 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.954926 kubelet[2635]: E1108 00:30:47.954887 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:47.957441 kubelet[2635]: E1108 00:30:47.955200 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.957441 kubelet[2635]: W1108 00:30:47.955232 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.957441 kubelet[2635]: E1108 00:30:47.955244 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:47.957864 kubelet[2635]: E1108 00:30:47.957846 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.957982 kubelet[2635]: W1108 00:30:47.957966 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.958091 kubelet[2635]: E1108 00:30:47.958074 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:47.959569 kubelet[2635]: E1108 00:30:47.959549 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.959689 kubelet[2635]: W1108 00:30:47.959670 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.961214 kubelet[2635]: E1108 00:30:47.959782 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:47.963555 kubelet[2635]: E1108 00:30:47.963536 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.963823 kubelet[2635]: W1108 00:30:47.963659 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.963823 kubelet[2635]: E1108 00:30:47.963683 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:47.965678 kubelet[2635]: E1108 00:30:47.965607 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:47.965678 kubelet[2635]: W1108 00:30:47.965625 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:47.965678 kubelet[2635]: E1108 00:30:47.965642 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.020379 containerd[1513]: time="2025-11-08T00:30:48.020081115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86d799bcb5-pbqth,Uid:5482fa03-87fd-4110-b776-46bd47b2e6b8,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:48.028689 kubelet[2635]: E1108 00:30:48.028638 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:48.032204 kubelet[2635]: E1108 00:30:48.032140 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.032204 kubelet[2635]: W1108 00:30:48.032177 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.032204 kubelet[2635]: E1108 00:30:48.032202 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.035964 kubelet[2635]: E1108 00:30:48.035852 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.035964 kubelet[2635]: W1108 00:30:48.035867 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.035964 kubelet[2635]: E1108 00:30:48.035894 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.036518 kubelet[2635]: E1108 00:30:48.036439 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.036518 kubelet[2635]: W1108 00:30:48.036450 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.036518 kubelet[2635]: E1108 00:30:48.036459 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.037653 kubelet[2635]: E1108 00:30:48.036704 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.037653 kubelet[2635]: W1108 00:30:48.036714 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.037653 kubelet[2635]: E1108 00:30:48.036722 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.038915 kubelet[2635]: E1108 00:30:48.038394 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.038915 kubelet[2635]: W1108 00:30:48.038452 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.038915 kubelet[2635]: E1108 00:30:48.038481 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.039136 kubelet[2635]: E1108 00:30:48.039127 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.039190 kubelet[2635]: W1108 00:30:48.039182 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.039627 kubelet[2635]: E1108 00:30:48.039236 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.039760 kubelet[2635]: E1108 00:30:48.039750 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.039892 kubelet[2635]: W1108 00:30:48.039803 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.039892 kubelet[2635]: E1108 00:30:48.039814 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.040154 kubelet[2635]: E1108 00:30:48.040105 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.040221 kubelet[2635]: W1108 00:30:48.040208 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.040326 kubelet[2635]: E1108 00:30:48.040259 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.040475 kubelet[2635]: E1108 00:30:48.040466 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.040521 kubelet[2635]: W1108 00:30:48.040514 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.040609 kubelet[2635]: E1108 00:30:48.040559 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.040716 kubelet[2635]: E1108 00:30:48.040708 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.040810 kubelet[2635]: W1108 00:30:48.040755 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.040810 kubelet[2635]: E1108 00:30:48.040766 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.041059 kubelet[2635]: E1108 00:30:48.040997 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.041059 kubelet[2635]: W1108 00:30:48.041005 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.041059 kubelet[2635]: E1108 00:30:48.041013 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.041275 kubelet[2635]: E1108 00:30:48.041200 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.041275 kubelet[2635]: W1108 00:30:48.041210 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.041275 kubelet[2635]: E1108 00:30:48.041219 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.041505 kubelet[2635]: E1108 00:30:48.041439 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.041505 kubelet[2635]: W1108 00:30:48.041447 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.041505 kubelet[2635]: E1108 00:30:48.041455 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.041700 kubelet[2635]: E1108 00:30:48.041691 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.041812 kubelet[2635]: W1108 00:30:48.041750 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.041812 kubelet[2635]: E1108 00:30:48.041760 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.042054 kubelet[2635]: E1108 00:30:48.041976 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.042054 kubelet[2635]: W1108 00:30:48.041985 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.042054 kubelet[2635]: E1108 00:30:48.041994 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.042204 kubelet[2635]: E1108 00:30:48.042196 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.042339 kubelet[2635]: W1108 00:30:48.042243 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.042339 kubelet[2635]: E1108 00:30:48.042253 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.042520 kubelet[2635]: E1108 00:30:48.042511 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.042571 kubelet[2635]: W1108 00:30:48.042564 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.042653 kubelet[2635]: E1108 00:30:48.042602 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.042855 kubelet[2635]: E1108 00:30:48.042773 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.042855 kubelet[2635]: W1108 00:30:48.042783 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.042855 kubelet[2635]: E1108 00:30:48.042790 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.043022 kubelet[2635]: E1108 00:30:48.043013 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.043136 kubelet[2635]: W1108 00:30:48.043073 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.043136 kubelet[2635]: E1108 00:30:48.043085 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.043346 kubelet[2635]: E1108 00:30:48.043257 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.043346 kubelet[2635]: W1108 00:30:48.043265 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.043346 kubelet[2635]: E1108 00:30:48.043273 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.044635 kubelet[2635]: E1108 00:30:48.044626 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.044774 kubelet[2635]: W1108 00:30:48.044687 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.044774 kubelet[2635]: E1108 00:30:48.044698 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.044774 kubelet[2635]: I1108 00:30:48.044718 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7a730453-478d-46fd-915f-5cbf5e28b105-varrun\") pod \"csi-node-driver-m4hcx\" (UID: \"7a730453-478d-46fd-915f-5cbf5e28b105\") " pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:48.045011 kubelet[2635]: E1108 00:30:48.044928 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.045011 kubelet[2635]: W1108 00:30:48.044938 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.045011 kubelet[2635]: E1108 00:30:48.044947 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.045011 kubelet[2635]: I1108 00:30:48.044960 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7a730453-478d-46fd-915f-5cbf5e28b105-registration-dir\") pod \"csi-node-driver-m4hcx\" (UID: \"7a730453-478d-46fd-915f-5cbf5e28b105\") " pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:48.045224 kubelet[2635]: E1108 00:30:48.045211 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.045345 kubelet[2635]: W1108 00:30:48.045269 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.045345 kubelet[2635]: E1108 00:30:48.045280 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.045345 kubelet[2635]: I1108 00:30:48.045293 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7a730453-478d-46fd-915f-5cbf5e28b105-kubelet-dir\") pod \"csi-node-driver-m4hcx\" (UID: \"7a730453-478d-46fd-915f-5cbf5e28b105\") " pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:48.045590 kubelet[2635]: E1108 00:30:48.045509 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.045590 kubelet[2635]: W1108 00:30:48.045517 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.045590 kubelet[2635]: E1108 00:30:48.045525 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.045590 kubelet[2635]: I1108 00:30:48.045537 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7a730453-478d-46fd-915f-5cbf5e28b105-socket-dir\") pod \"csi-node-driver-m4hcx\" (UID: \"7a730453-478d-46fd-915f-5cbf5e28b105\") " pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:48.045875 kubelet[2635]: E1108 00:30:48.045749 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.045875 kubelet[2635]: W1108 00:30:48.045758 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.045875 kubelet[2635]: E1108 00:30:48.045765 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.045875 kubelet[2635]: I1108 00:30:48.045779 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2rp\" (UniqueName: \"kubernetes.io/projected/7a730453-478d-46fd-915f-5cbf5e28b105-kube-api-access-4m2rp\") pod \"csi-node-driver-m4hcx\" (UID: \"7a730453-478d-46fd-915f-5cbf5e28b105\") " pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:48.046049 kubelet[2635]: E1108 00:30:48.046040 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.046096 kubelet[2635]: W1108 00:30:48.046089 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.046138 kubelet[2635]: E1108 00:30:48.046131 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.046372 kubelet[2635]: E1108 00:30:48.046260 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.046372 kubelet[2635]: W1108 00:30:48.046268 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.046372 kubelet[2635]: E1108 00:30:48.046275 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.046551 kubelet[2635]: E1108 00:30:48.046544 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.046599 kubelet[2635]: W1108 00:30:48.046593 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.046690 kubelet[2635]: E1108 00:30:48.046630 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.046839 kubelet[2635]: E1108 00:30:48.046819 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.046919 kubelet[2635]: W1108 00:30:48.046900 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.046919 kubelet[2635]: E1108 00:30:48.046911 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.047210 kubelet[2635]: E1108 00:30:48.047134 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.047210 kubelet[2635]: W1108 00:30:48.047142 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.047210 kubelet[2635]: E1108 00:30:48.047150 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.047449 kubelet[2635]: E1108 00:30:48.047357 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.047449 kubelet[2635]: W1108 00:30:48.047364 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.047449 kubelet[2635]: E1108 00:30:48.047373 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.047907 kubelet[2635]: E1108 00:30:48.047816 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.047907 kubelet[2635]: W1108 00:30:48.047850 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.047907 kubelet[2635]: E1108 00:30:48.047858 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.048121 kubelet[2635]: E1108 00:30:48.048063 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.048121 kubelet[2635]: W1108 00:30:48.048071 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.048121 kubelet[2635]: E1108 00:30:48.048078 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.048365 kubelet[2635]: E1108 00:30:48.048311 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.048365 kubelet[2635]: W1108 00:30:48.048318 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.048365 kubelet[2635]: E1108 00:30:48.048326 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.048605 kubelet[2635]: E1108 00:30:48.048576 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.048605 kubelet[2635]: W1108 00:30:48.048584 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.048605 kubelet[2635]: E1108 00:30:48.048591 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.078475 containerd[1513]: time="2025-11-08T00:30:48.078234004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:48.078475 containerd[1513]: time="2025-11-08T00:30:48.078450531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:48.079082 containerd[1513]: time="2025-11-08T00:30:48.078472632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:48.079082 containerd[1513]: time="2025-11-08T00:30:48.078556970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:48.138113 systemd[1]: Started cri-containerd-b5aa79feb373cd4aa2a6fe1e095cb51968ed763bfda1bc3f4da5b6465831b86d.scope - libcontainer container b5aa79feb373cd4aa2a6fe1e095cb51968ed763bfda1bc3f4da5b6465831b86d. Nov 8 00:30:48.139698 containerd[1513]: time="2025-11-08T00:30:48.137691602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bnsqs,Uid:c9290bfc-abc1-4b17-a06e-63975f90e0fe,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:48.146584 kubelet[2635]: E1108 00:30:48.146435 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.146584 kubelet[2635]: W1108 00:30:48.146484 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.146584 kubelet[2635]: E1108 00:30:48.146505 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.149070 kubelet[2635]: E1108 00:30:48.147543 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.149070 kubelet[2635]: W1108 00:30:48.147556 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.149070 kubelet[2635]: E1108 00:30:48.147572 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.149070 kubelet[2635]: E1108 00:30:48.148953 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.149070 kubelet[2635]: W1108 00:30:48.148962 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.149070 kubelet[2635]: E1108 00:30:48.148971 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.149291 kubelet[2635]: E1108 00:30:48.149282 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.149366 kubelet[2635]: W1108 00:30:48.149355 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.149960 kubelet[2635]: E1108 00:30:48.149794 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.150820 kubelet[2635]: E1108 00:30:48.150505 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.150820 kubelet[2635]: W1108 00:30:48.150514 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.150820 kubelet[2635]: E1108 00:30:48.150523 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.151956 kubelet[2635]: E1108 00:30:48.151806 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.151956 kubelet[2635]: W1108 00:30:48.151819 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.151956 kubelet[2635]: E1108 00:30:48.151855 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.152518 kubelet[2635]: E1108 00:30:48.152484 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.152518 kubelet[2635]: W1108 00:30:48.152493 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.152518 kubelet[2635]: E1108 00:30:48.152501 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.153300 kubelet[2635]: E1108 00:30:48.152998 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.153300 kubelet[2635]: W1108 00:30:48.153007 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.153300 kubelet[2635]: E1108 00:30:48.153016 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.154047 kubelet[2635]: E1108 00:30:48.154037 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.154142 kubelet[2635]: W1108 00:30:48.154096 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.154142 kubelet[2635]: E1108 00:30:48.154108 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.154698 kubelet[2635]: E1108 00:30:48.154631 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.154698 kubelet[2635]: W1108 00:30:48.154643 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.154698 kubelet[2635]: E1108 00:30:48.154652 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.156152 kubelet[2635]: E1108 00:30:48.156142 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.156243 kubelet[2635]: W1108 00:30:48.156197 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.156360 kubelet[2635]: E1108 00:30:48.156349 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.157319 kubelet[2635]: E1108 00:30:48.157310 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.157531 kubelet[2635]: W1108 00:30:48.157519 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.157582 kubelet[2635]: E1108 00:30:48.157575 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.158391 kubelet[2635]: E1108 00:30:48.158365 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.158391 kubelet[2635]: W1108 00:30:48.158374 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.158391 kubelet[2635]: E1108 00:30:48.158383 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.158782 kubelet[2635]: E1108 00:30:48.158717 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.158782 kubelet[2635]: W1108 00:30:48.158726 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.158782 kubelet[2635]: E1108 00:30:48.158734 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.159130 kubelet[2635]: E1108 00:30:48.159052 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.159130 kubelet[2635]: W1108 00:30:48.159060 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.159130 kubelet[2635]: E1108 00:30:48.159068 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.160947 kubelet[2635]: E1108 00:30:48.159388 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.160947 kubelet[2635]: W1108 00:30:48.159405 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.160947 kubelet[2635]: E1108 00:30:48.159429 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.162630 kubelet[2635]: E1108 00:30:48.162491 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.162630 kubelet[2635]: W1108 00:30:48.162500 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.162630 kubelet[2635]: E1108 00:30:48.162510 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.162777 kubelet[2635]: E1108 00:30:48.162769 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.162837 kubelet[2635]: W1108 00:30:48.162817 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.162894 kubelet[2635]: E1108 00:30:48.162877 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.163087 kubelet[2635]: E1108 00:30:48.163080 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.163267 kubelet[2635]: W1108 00:30:48.163129 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.163267 kubelet[2635]: E1108 00:30:48.163156 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.163699 kubelet[2635]: E1108 00:30:48.163536 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.163699 kubelet[2635]: W1108 00:30:48.163546 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.163699 kubelet[2635]: E1108 00:30:48.163554 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.164268 kubelet[2635]: E1108 00:30:48.164216 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.164508 kubelet[2635]: W1108 00:30:48.164495 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.164571 kubelet[2635]: E1108 00:30:48.164560 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.165589 kubelet[2635]: E1108 00:30:48.165580 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.165639 kubelet[2635]: W1108 00:30:48.165632 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.165687 kubelet[2635]: E1108 00:30:48.165680 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.166123 kubelet[2635]: E1108 00:30:48.166089 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.168931 kubelet[2635]: W1108 00:30:48.168445 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.168931 kubelet[2635]: E1108 00:30:48.168466 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.168931 kubelet[2635]: E1108 00:30:48.168704 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.168931 kubelet[2635]: W1108 00:30:48.168712 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.168931 kubelet[2635]: E1108 00:30:48.168723 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.169337 kubelet[2635]: E1108 00:30:48.169086 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.169337 kubelet[2635]: W1108 00:30:48.169095 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.169337 kubelet[2635]: E1108 00:30:48.169103 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.183033 kubelet[2635]: E1108 00:30:48.183007 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.183181 kubelet[2635]: W1108 00:30:48.183170 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.183243 kubelet[2635]: E1108 00:30:48.183234 2635 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.187400 containerd[1513]: time="2025-11-08T00:30:48.187150069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:48.187400 containerd[1513]: time="2025-11-08T00:30:48.187237463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:48.187400 containerd[1513]: time="2025-11-08T00:30:48.187253102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:48.188184 containerd[1513]: time="2025-11-08T00:30:48.188101074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Nov 8 00:30:48.210298 systemd[1]: Started cri-containerd-beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b.scope - libcontainer container beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b. Nov 8 00:30:48.240317 containerd[1513]: time="2025-11-08T00:30:48.239010232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bnsqs,Uid:c9290bfc-abc1-4b17-a06e-63975f90e0fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\"" Nov 8 00:30:48.243853 containerd[1513]: time="2025-11-08T00:30:48.242787930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:30:48.265368 containerd[1513]: time="2025-11-08T00:30:48.265337249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86d799bcb5-pbqth,Uid:5482fa03-87fd-4110-b776-46bd47b2e6b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5aa79feb373cd4aa2a6fe1e095cb51968ed763bfda1bc3f4da5b6465831b86d\"" Nov 8 00:30:49.431522 kubelet[2635]: E1108 00:30:49.431348 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:50.301336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678258046.mount: Deactivated successfully. Nov 8 00:30:50.406579 containerd[1513]: time="2025-11-08T00:30:50.406477506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 8 00:30:50.419516 containerd[1513]: time="2025-11-08T00:30:50.419495498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:50.421740 containerd[1513]: time="2025-11-08T00:30:50.421688089Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:50.423045 containerd[1513]: time="2025-11-08T00:30:50.422929809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:50.423744 containerd[1513]: time="2025-11-08T00:30:50.423697651Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.180884714s" Nov 8 00:30:50.423744 containerd[1513]: time="2025-11-08T00:30:50.423737034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:30:50.425977 containerd[1513]: time="2025-11-08T00:30:50.425883880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:30:50.430162 containerd[1513]: time="2025-11-08T00:30:50.430095882Z" level=info
msg="CreateContainer within sandbox \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:30:50.452766 containerd[1513]: time="2025-11-08T00:30:50.452684861Z" level=info msg="CreateContainer within sandbox \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53\"" Nov 8 00:30:50.454950 containerd[1513]: time="2025-11-08T00:30:50.453478341Z" level=info msg="StartContainer for \"900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53\"" Nov 8 00:30:50.496596 systemd[1]: Started cri-containerd-900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53.scope - libcontainer container 900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53. Nov 8 00:30:50.537770 containerd[1513]: time="2025-11-08T00:30:50.537691700Z" level=info msg="StartContainer for \"900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53\" returns successfully" Nov 8 00:30:50.557012 systemd[1]: cri-containerd-900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53.scope: Deactivated successfully. Nov 8 00:30:50.726541 containerd[1513]: time="2025-11-08T00:30:50.665676884Z" level=info msg="shim disconnected" id=900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53 namespace=k8s.io Nov 8 00:30:50.726541 containerd[1513]: time="2025-11-08T00:30:50.726530317Z" level=warning msg="cleaning up after shim disconnected" id=900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53 namespace=k8s.io Nov 8 00:30:50.726868 containerd[1513]: time="2025-11-08T00:30:50.726561145Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:51.240867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-900663d489fb76f002bddb74b9b4cc073816bc39afa45b00bec8abc7391fda53-rootfs.mount: Deactivated successfully. 
Nov 8 00:30:51.431374 kubelet[2635]: E1108 00:30:51.431248 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:52.703520 containerd[1513]: time="2025-11-08T00:30:52.703465271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:52.704885 containerd[1513]: time="2025-11-08T00:30:52.704719837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 8 00:30:52.706990 containerd[1513]: time="2025-11-08T00:30:52.706008194Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:52.708455 containerd[1513]: time="2025-11-08T00:30:52.708348783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:52.709596 containerd[1513]: time="2025-11-08T00:30:52.708977918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.283059885s" Nov 8 00:30:52.709596 containerd[1513]: time="2025-11-08T00:30:52.709002614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:30:52.710088 containerd[1513]: time="2025-11-08T00:30:52.709899487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:30:52.725009 containerd[1513]: time="2025-11-08T00:30:52.724920287Z" level=info msg="CreateContainer within sandbox \"b5aa79feb373cd4aa2a6fe1e095cb51968ed763bfda1bc3f4da5b6465831b86d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:30:52.743510 containerd[1513]: time="2025-11-08T00:30:52.743470068Z" level=info msg="CreateContainer within sandbox \"b5aa79feb373cd4aa2a6fe1e095cb51968ed763bfda1bc3f4da5b6465831b86d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a7df834dad21fd5732692995f35e60f3cef609b6a34f96c98cd2c95e65d4f7e\"" Nov 8 00:30:52.744172 containerd[1513]: time="2025-11-08T00:30:52.743979422Z" level=info msg="StartContainer for \"6a7df834dad21fd5732692995f35e60f3cef609b6a34f96c98cd2c95e65d4f7e\"" Nov 8 00:30:52.782594 systemd[1]: Started cri-containerd-6a7df834dad21fd5732692995f35e60f3cef609b6a34f96c98cd2c95e65d4f7e.scope - libcontainer container 6a7df834dad21fd5732692995f35e60f3cef609b6a34f96c98cd2c95e65d4f7e. 
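Each "PullImage ... returns image reference" pair in this log is one CRI ImageService round trip: the runtime resolves the tag to a repo digest and hands back the digest-pinned image ID, together with the size and elapsed time seen above. A minimal sketch of the same RPC against containerd's CRI socket follows; it is illustrative only, not the kubelet's actual pull path, and the socket and image reference are taken from this log:

```go
// Sketch: pull an image over the CRI ImageService, the RPC behind the
// "PullImage ... returns image reference" entries above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	images := runtimeapi.NewImageServiceClient(conn)
	resp, err := images.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// ImageRef is the resolved, digest-pinned reference, matching the
	// "returns image reference \"sha256:...\"" entries in the log.
	fmt.Println("pulled:", resp.ImageRef)
}
```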
Nov 8 00:30:52.841668 containerd[1513]: time="2025-11-08T00:30:52.841493089Z" level=info msg="StartContainer for \"6a7df834dad21fd5732692995f35e60f3cef609b6a34f96c98cd2c95e65d4f7e\" returns successfully" Nov 8 00:30:53.430832 kubelet[2635]: E1108 00:30:53.430772 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:53.613821 kubelet[2635]: I1108 00:30:53.609312 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86d799bcb5-pbqth" podStartSLOduration=2.164760373 podStartE2EDuration="6.607545119s" podCreationTimestamp="2025-11-08 00:30:47 +0000 UTC" firstStartedPulling="2025-11-08 00:30:48.266997765 +0000 UTC m=+20.005493637" lastFinishedPulling="2025-11-08 00:30:52.709782509 +0000 UTC m=+24.448278383" observedRunningTime="2025-11-08 00:30:53.605209198 +0000 UTC m=+25.343705122" watchObservedRunningTime="2025-11-08 00:30:53.607545119 +0000 UTC m=+25.346041043" Nov 8 00:30:54.580585 kubelet[2635]: I1108 00:30:54.580517 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:30:55.430700 kubelet[2635]: E1108 00:30:55.430627 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:55.434455 containerd[1513]: time="2025-11-08T00:30:55.434391626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:55.435872 containerd[1513]: time="2025-11-08T00:30:55.435722254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:30:55.436932 containerd[1513]: time="2025-11-08T00:30:55.436886474Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:55.439112 containerd[1513]: time="2025-11-08T00:30:55.439092968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:55.439933 containerd[1513]: time="2025-11-08T00:30:55.439799699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.729875457s" Nov 8 00:30:55.439933 containerd[1513]: time="2025-11-08T00:30:55.439823463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:30:55.445761 containerd[1513]: time="2025-11-08T00:30:55.444605283Z" level=info msg="CreateContainer within sandbox \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:30:55.467239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300151102.mount: Deactivated successfully. Nov 8 00:30:55.472983 containerd[1513]: time="2025-11-08T00:30:55.472917679Z" level=info msg="CreateContainer within sandbox \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf\"" Nov 8 00:30:55.475632 containerd[1513]: time="2025-11-08T00:30:55.473666408Z" level=info msg="StartContainer for \"c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf\"" Nov 8 00:30:55.540548 systemd[1]: Started cri-containerd-c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf.scope - libcontainer container c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf. Nov 8 00:30:55.579769 containerd[1513]: time="2025-11-08T00:30:55.579716913Z" level=info msg="StartContainer for \"c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf\" returns successfully" Nov 8 00:30:56.172650 systemd[1]: cri-containerd-c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf.scope: Deactivated successfully. Nov 8 00:30:56.223086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf-rootfs.mount: Deactivated successfully. Nov 8 00:30:56.257404 kubelet[2635]: I1108 00:30:56.257359 2635 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:30:56.270750 containerd[1513]: time="2025-11-08T00:30:56.270190067Z" level=info msg="shim disconnected" id=c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf namespace=k8s.io Nov 8 00:30:56.270750 containerd[1513]: time="2025-11-08T00:30:56.270304769Z" level=warning msg="cleaning up after shim disconnected" id=c519aff2cf941c36f98e6ce2537327da91d5eceaf714f24b76b711c13153a8bf namespace=k8s.io Nov 8 00:30:56.270750 containerd[1513]: time="2025-11-08T00:30:56.270318105Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:56.298277 containerd[1513]: time="2025-11-08T00:30:56.297103241Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:30:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:30:56.329243 systemd[1]: Created slice kubepods-burstable-pod0f765679_4f0d_4ea9_957a_c1950533f8b3.slice - libcontainer container kubepods-burstable-pod0f765679_4f0d_4ea9_957a_c1950533f8b3.slice. Nov 8 00:30:56.351442 systemd[1]: Created slice kubepods-besteffort-poda375dcd8_3fbe_482e_815c_13c9165d26d2.slice - libcontainer container kubepods-besteffort-poda375dcd8_3fbe_482e_815c_13c9165d26d2.slice. Nov 8 00:30:56.364709 systemd[1]: Created slice kubepods-besteffort-pod6654d39b_215e_4d8c_8e26_48e297d85f8d.slice - libcontainer container kubepods-besteffort-pod6654d39b_215e_4d8c_8e26_48e297d85f8d.slice. Nov 8 00:30:56.373515 systemd[1]: Created slice kubepods-burstable-pod0bf2f9f4_97c3_4ca2_a71b_3b614c8ed2e5.slice - libcontainer container kubepods-burstable-pod0bf2f9f4_97c3_4ca2_a71b_3b614c8ed2e5.slice. Nov 8 00:30:56.381888 systemd[1]: Created slice kubepods-besteffort-pod7bfc75bc_86f9_445a_9d64_08b33fc703e4.slice - libcontainer container kubepods-besteffort-pod7bfc75bc_86f9_445a_9d64_08b33fc703e4.slice. 
Nov 8 00:30:56.389755 systemd[1]: Created slice kubepods-besteffort-pod562850b7_c26f_461d_bfe8_c8a199e1bb8c.slice - libcontainer container kubepods-besteffort-pod562850b7_c26f_461d_bfe8_c8a199e1bb8c.slice. Nov 8 00:30:56.392832 systemd[1]: Created slice kubepods-besteffort-pod8b84b572_8061_4813_b462_3eea6f974bdd.slice - libcontainer container kubepods-besteffort-pod8b84b572_8061_4813_b462_3eea6f974bdd.slice. Nov 8 00:30:56.427397 kubelet[2635]: I1108 00:30:56.426681 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b84b572-8061-4813-b462-3eea6f974bdd-config\") pod \"goldmane-666569f655-p4b9r\" (UID: \"8b84b572-8061-4813-b462-3eea6f974bdd\") " pod="calico-system/goldmane-666569f655-p4b9r" Nov 8 00:30:56.427713 kubelet[2635]: I1108 00:30:56.427674 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n49w7\" (UniqueName: \"kubernetes.io/projected/7bfc75bc-86f9-445a-9d64-08b33fc703e4-kube-api-access-n49w7\") pod \"calico-kube-controllers-5b858bf6c9-q5c9k\" (UID: \"7bfc75bc-86f9-445a-9d64-08b33fc703e4\") " pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" Nov 8 00:30:56.427868 kubelet[2635]: I1108 00:30:56.427812 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/562850b7-c26f-461d-bfe8-c8a199e1bb8c-calico-apiserver-certs\") pod \"calico-apiserver-64c9474b6f-brv5c\" (UID: \"562850b7-c26f-461d-bfe8-c8a199e1bb8c\") " pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" Nov 8 00:30:56.428006 kubelet[2635]: I1108 00:30:56.427996 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8b84b572-8061-4813-b462-3eea6f974bdd-goldmane-key-pair\") pod \"goldmane-666569f655-p4b9r\" (UID: \"8b84b572-8061-4813-b462-3eea6f974bdd\") " pod="calico-system/goldmane-666569f655-p4b9r" Nov 8 00:30:56.428166 kubelet[2635]: I1108 00:30:56.428154 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595cr\" (UniqueName: \"kubernetes.io/projected/8b84b572-8061-4813-b462-3eea6f974bdd-kube-api-access-595cr\") pod \"goldmane-666569f655-p4b9r\" (UID: \"8b84b572-8061-4813-b462-3eea6f974bdd\") " pod="calico-system/goldmane-666569f655-p4b9r" Nov 8 00:30:56.428300 kubelet[2635]: I1108 00:30:56.428237 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49q75\" (UniqueName: \"kubernetes.io/projected/a375dcd8-3fbe-482e-815c-13c9165d26d2-kube-api-access-49q75\") pod \"calico-apiserver-64c9474b6f-jbmnh\" (UID: \"a375dcd8-3fbe-482e-815c-13c9165d26d2\") " pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" Nov 8 00:30:56.428544 kubelet[2635]: I1108 00:30:56.428368 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5-config-volume\") pod \"coredns-674b8bbfcf-lf6dp\" (UID: \"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5\") " pod="kube-system/coredns-674b8bbfcf-lf6dp" Nov 8 00:30:56.428974 kubelet[2635]: I1108 00:30:56.428646 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/a375dcd8-3fbe-482e-815c-13c9165d26d2-calico-apiserver-certs\") pod \"calico-apiserver-64c9474b6f-jbmnh\" (UID: \"a375dcd8-3fbe-482e-815c-13c9165d26d2\") " pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" Nov 8 00:30:56.429102 kubelet[2635]: I1108 00:30:56.429038 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f765679-4f0d-4ea9-957a-c1950533f8b3-config-volume\") pod \"coredns-674b8bbfcf-dcdpj\" (UID: \"0f765679-4f0d-4ea9-957a-c1950533f8b3\") " pod="kube-system/coredns-674b8bbfcf-dcdpj" Nov 8 00:30:56.429102 kubelet[2635]: I1108 00:30:56.429061 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whvc\" (UniqueName: \"kubernetes.io/projected/0f765679-4f0d-4ea9-957a-c1950533f8b3-kube-api-access-5whvc\") pod \"coredns-674b8bbfcf-dcdpj\" (UID: \"0f765679-4f0d-4ea9-957a-c1950533f8b3\") " pod="kube-system/coredns-674b8bbfcf-dcdpj" Nov 8 00:30:56.429102 kubelet[2635]: I1108 00:30:56.429076 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bfc75bc-86f9-445a-9d64-08b33fc703e4-tigera-ca-bundle\") pod \"calico-kube-controllers-5b858bf6c9-q5c9k\" (UID: \"7bfc75bc-86f9-445a-9d64-08b33fc703e4\") " pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" Nov 8 00:30:56.430075 kubelet[2635]: I1108 00:30:56.429241 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-backend-key-pair\") pod \"whisker-97cb9cc46-6qg4s\" (UID: \"6654d39b-215e-4d8c-8e26-48e297d85f8d\") " pod="calico-system/whisker-97cb9cc46-6qg4s" Nov 8 00:30:56.430075 kubelet[2635]: I1108 00:30:56.429291 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b84b572-8061-4813-b462-3eea6f974bdd-goldmane-ca-bundle\") pod \"goldmane-666569f655-p4b9r\" (UID: \"8b84b572-8061-4813-b462-3eea6f974bdd\") " pod="calico-system/goldmane-666569f655-p4b9r" Nov 8 00:30:56.430075 kubelet[2635]: I1108 00:30:56.429324 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-ca-bundle\") pod \"whisker-97cb9cc46-6qg4s\" (UID: \"6654d39b-215e-4d8c-8e26-48e297d85f8d\") " pod="calico-system/whisker-97cb9cc46-6qg4s" Nov 8 00:30:56.430075 kubelet[2635]: I1108 00:30:56.429364 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvsmw\" (UniqueName: \"kubernetes.io/projected/6654d39b-215e-4d8c-8e26-48e297d85f8d-kube-api-access-vvsmw\") pod \"whisker-97cb9cc46-6qg4s\" (UID: \"6654d39b-215e-4d8c-8e26-48e297d85f8d\") " pod="calico-system/whisker-97cb9cc46-6qg4s" Nov 8 00:30:56.430075 kubelet[2635]: I1108 00:30:56.429431 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlzmd\" (UniqueName: \"kubernetes.io/projected/562850b7-c26f-461d-bfe8-c8a199e1bb8c-kube-api-access-hlzmd\") pod \"calico-apiserver-64c9474b6f-brv5c\" (UID: \"562850b7-c26f-461d-bfe8-c8a199e1bb8c\") " pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" 
Nov 8 00:30:56.605758 containerd[1513]: time="2025-11-08T00:30:56.605641600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:30:56.633823 containerd[1513]: time="2025-11-08T00:30:56.633751574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcdpj,Uid:0f765679-4f0d-4ea9-957a-c1950533f8b3,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:56.664262 containerd[1513]: time="2025-11-08T00:30:56.664066743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-jbmnh,Uid:a375dcd8-3fbe-482e-815c-13c9165d26d2,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:30:56.676244 containerd[1513]: time="2025-11-08T00:30:56.676135348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97cb9cc46-6qg4s,Uid:6654d39b-215e-4d8c-8e26-48e297d85f8d,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:56.678383 containerd[1513]: time="2025-11-08T00:30:56.678278656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lf6dp,Uid:0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:56.685362 containerd[1513]: time="2025-11-08T00:30:56.685294816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b858bf6c9-q5c9k,Uid:7bfc75bc-86f9-445a-9d64-08b33fc703e4,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:56.705113 containerd[1513]: time="2025-11-08T00:30:56.705004362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p4b9r,Uid:8b84b572-8061-4813-b462-3eea6f974bdd,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:56.705483 containerd[1513]: time="2025-11-08T00:30:56.705449638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-brv5c,Uid:562850b7-c26f-461d-bfe8-c8a199e1bb8c,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:30:57.068326 containerd[1513]: time="2025-11-08T00:30:57.068157208Z" level=error msg="Failed to destroy network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.068711 containerd[1513]: time="2025-11-08T00:30:57.068663757Z" level=error msg="Failed to destroy network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.069119 containerd[1513]: time="2025-11-08T00:30:57.068845425Z" level=error msg="Failed to destroy network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
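Every sandbox failure that follows bottoms out in the same stat call: the Calico CNI plugin refuses to add or delete pod networks until calico/node has written the node's name to /var/lib/calico/nodename, and at this point the calico-node pod is itself still pulling its image. A standalone sketch of that gate, illustrative only and not Calico's actual source:

```go
// Sketch: the readiness gate behind every "stat /var/lib/calico/nodename:
// no such file or directory" error above. The file is written by the
// calico/node container once it is running.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// The exact condition containerd keeps reporting in this log.
		fmt.Println("calico/node has not written its nodename yet; CNI not ready")
		os.Exit(1)
	}
	if err != nil {
		fmt.Println("error reading nodename:", err)
		os.Exit(1)
	}
	fmt.Println("CNI ready on node:", strings.TrimSpace(string(data)))
}
```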
Nov 8 00:30:57.075680 containerd[1513]: time="2025-11-08T00:30:57.075539271Z" level=error msg="encountered an error cleaning up failed sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.077099 containerd[1513]: time="2025-11-08T00:30:57.076015775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-jbmnh,Uid:a375dcd8-3fbe-482e-815c-13c9165d26d2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.079777 containerd[1513]: time="2025-11-08T00:30:57.077524425Z" level=error msg="encountered an error cleaning up failed sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.079777 containerd[1513]: time="2025-11-08T00:30:57.077612279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-97cb9cc46-6qg4s,Uid:6654d39b-215e-4d8c-8e26-48e297d85f8d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.082353 containerd[1513]: time="2025-11-08T00:30:57.082307955Z" level=error msg="encountered an error cleaning up failed sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.083878 kubelet[2635]: E1108 00:30:57.083839 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.084149 kubelet[2635]: E1108 00:30:57.084125 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" Nov 8 00:30:57.084241 containerd[1513]: time="2025-11-08T00:30:57.084026775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b858bf6c9-q5c9k,Uid:7bfc75bc-86f9-445a-9d64-08b33fc703e4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup
network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.084475 kubelet[2635]: E1108 00:30:57.084446 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" Nov 8 00:30:57.084631 containerd[1513]: time="2025-11-08T00:30:57.083273126Z" level=error msg="Failed to destroy network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.084747 kubelet[2635]: E1108 00:30:57.084204 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.084788 kubelet[2635]: E1108 00:30:57.084767 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-97cb9cc46-6qg4s" Nov 8 00:30:57.084814 kubelet[2635]: E1108 00:30:57.084793 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-97cb9cc46-6qg4s" Nov 8 00:30:57.084890 kubelet[2635]: E1108 00:30:57.084837 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-97cb9cc46-6qg4s_calico-system(6654d39b-215e-4d8c-8e26-48e297d85f8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-97cb9cc46-6qg4s_calico-system(6654d39b-215e-4d8c-8e26-48e297d85f8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-97cb9cc46-6qg4s" podUID="6654d39b-215e-4d8c-8e26-48e297d85f8d" Nov 8 00:30:57.085149 kubelet[2635]: E1108 00:30:57.084596 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-64c9474b6f-jbmnh_calico-apiserver(a375dcd8-3fbe-482e-815c-13c9165d26d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c9474b6f-jbmnh_calico-apiserver(a375dcd8-3fbe-482e-815c-13c9165d26d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:30:57.087532 kubelet[2635]: E1108 00:30:57.087478 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.087630 kubelet[2635]: E1108 00:30:57.087543 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" Nov 8 00:30:57.087630 kubelet[2635]: E1108 00:30:57.087570 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" Nov 8 00:30:57.088535 kubelet[2635]: E1108 00:30:57.087715 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5b858bf6c9-q5c9k_calico-system(7bfc75bc-86f9-445a-9d64-08b33fc703e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5b858bf6c9-q5c9k_calico-system(7bfc75bc-86f9-445a-9d64-08b33fc703e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:30:57.088618 containerd[1513]: time="2025-11-08T00:30:57.083337215Z" level=error msg="Failed to destroy network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.088618 containerd[1513]: time="2025-11-08T00:30:57.085682939Z" level=error msg="Failed to destroy network for sandbox 
\"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.088868 containerd[1513]: time="2025-11-08T00:30:57.088846792Z" level=error msg="encountered an error cleaning up failed sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.088972 containerd[1513]: time="2025-11-08T00:30:57.088955094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcdpj,Uid:0f765679-4f0d-4ea9-957a-c1950533f8b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.089110 containerd[1513]: time="2025-11-08T00:30:57.085872261Z" level=error msg="encountered an error cleaning up failed sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.089235 containerd[1513]: time="2025-11-08T00:30:57.089211050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p4b9r,Uid:8b84b572-8061-4813-b462-3eea6f974bdd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.089578 kubelet[2635]: E1108 00:30:57.089552 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.089839 kubelet[2635]: E1108 00:30:57.089729 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dcdpj" Nov 8 00:30:57.089839 kubelet[2635]: E1108 00:30:57.089791 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dcdpj" Nov 8 00:30:57.089839 kubelet[2635]: E1108 00:30:57.089553 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.090050 kubelet[2635]: E1108 00:30:57.089947 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p4b9r" Nov 8 00:30:57.090050 kubelet[2635]: E1108 00:30:57.089978 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p4b9r" Nov 8 00:30:57.090050 kubelet[2635]: E1108 00:30:57.089985 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dcdpj_kube-system(0f765679-4f0d-4ea9-957a-c1950533f8b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dcdpj_kube-system(0f765679-4f0d-4ea9-957a-c1950533f8b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dcdpj" podUID="0f765679-4f0d-4ea9-957a-c1950533f8b3" Nov 8 00:30:57.090837 kubelet[2635]: E1108 00:30:57.090023 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-p4b9r_calico-system(8b84b572-8061-4813-b462-3eea6f974bdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-p4b9r_calico-system(8b84b572-8061-4813-b462-3eea6f974bdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:30:57.090837 kubelet[2635]: E1108 00:30:57.090363 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:30:57.090837 kubelet[2635]: E1108 00:30:57.090389 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lf6dp" Nov 8 00:30:57.090929 containerd[1513]: time="2025-11-08T00:30:57.090150703Z" level=error msg="encountered an error cleaning up failed sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.090929 containerd[1513]: time="2025-11-08T00:30:57.090221424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lf6dp,Uid:0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.090929 containerd[1513]: time="2025-11-08T00:30:57.090571144Z" level=error msg="Failed to destroy network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.091400 kubelet[2635]: E1108 00:30:57.090404 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lf6dp" Nov 8 00:30:57.091400 kubelet[2635]: E1108 00:30:57.090445 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lf6dp_kube-system(0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lf6dp_kube-system(0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lf6dp" podUID="0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5" Nov 8 00:30:57.091400 kubelet[2635]: E1108 00:30:57.091203 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.091582 containerd[1513]: time="2025-11-08T00:30:57.090944296Z" level=error msg="encountered an error cleaning up failed sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.091582 containerd[1513]: time="2025-11-08T00:30:57.090983228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-brv5c,Uid:562850b7-c26f-461d-bfe8-c8a199e1bb8c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.091719 kubelet[2635]: E1108 00:30:57.091232 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" Nov 8 00:30:57.091719 kubelet[2635]: E1108 00:30:57.091246 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" Nov 8 00:30:57.091719 kubelet[2635]: E1108 00:30:57.091297 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64c9474b6f-brv5c_calico-apiserver(562850b7-c26f-461d-bfe8-c8a199e1bb8c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64c9474b6f-brv5c_calico-apiserver(562850b7-c26f-461d-bfe8-c8a199e1bb8c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:30:57.440442 systemd[1]: Created slice kubepods-besteffort-pod7a730453_478d_46fd_915f_5cbf5e28b105.slice - libcontainer container kubepods-besteffort-pod7a730453_478d_46fd_915f_5cbf5e28b105.slice. 
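
Every sandbox failure above has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico-node writes into a host-mounted volume once it is running. Until the file exists, every CNI ADD and DEL returns the stat error, kubelet records CreatePodSandboxError, and the pods are retried on the next sync. A minimal Go sketch of that guard, assuming only what the error text itself states (illustrative, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path named in the errors above; calico-node mounts
// /var/lib/calico from the host and writes this file during startup.
const nodenameFile = "/var/lib/calico/nodename"

// detectNodename fails with the same hint seen in the log whenever the
// file has not been written yet.
func detectNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

On a node in this state, os.Stat returns "stat /var/lib/calico/nodename: no such file or directory", matching the log verbatim; the entries that follow show kubelet continuing to retry while the file is still missing.
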
Nov 8 00:30:57.445906 containerd[1513]: time="2025-11-08T00:30:57.445817148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4hcx,Uid:7a730453-478d-46fd-915f-5cbf5e28b105,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:57.525449 containerd[1513]: time="2025-11-08T00:30:57.525135645Z" level=error msg="Failed to destroy network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.525656 containerd[1513]: time="2025-11-08T00:30:57.525515350Z" level=error msg="encountered an error cleaning up failed sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.525656 containerd[1513]: time="2025-11-08T00:30:57.525576002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4hcx,Uid:7a730453-478d-46fd-915f-5cbf5e28b105,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.526717 kubelet[2635]: E1108 00:30:57.525890 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.526717 kubelet[2635]: E1108 00:30:57.525971 2635 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:57.526717 kubelet[2635]: E1108 00:30:57.526042 2635 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m4hcx" Nov 8 00:30:57.527321 kubelet[2635]: E1108 00:30:57.526137 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:57.564095 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5-shm.mount: Deactivated successfully. Nov 8 00:30:57.607466 kubelet[2635]: I1108 00:30:57.607341 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:30:57.612031 kubelet[2635]: I1108 00:30:57.611968 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:30:57.641836 kubelet[2635]: I1108 00:30:57.640722 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:30:57.647403 kubelet[2635]: I1108 00:30:57.644947 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:30:57.647403 kubelet[2635]: I1108 00:30:57.646837 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:30:57.650131 kubelet[2635]: I1108 00:30:57.648919 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:30:57.652490 kubelet[2635]: I1108 00:30:57.651347 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:30:57.653002 kubelet[2635]: I1108 00:30:57.652969 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:30:57.674253 containerd[1513]: time="2025-11-08T00:30:57.672187894Z" level=info msg="StopPodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\"" Nov 8 00:30:57.675297 containerd[1513]: time="2025-11-08T00:30:57.675230482Z" level=info msg="Ensure that sandbox df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a in task-service has been cleanup successfully" Nov 8 00:30:57.675873 containerd[1513]: time="2025-11-08T00:30:57.675806251Z" level=info msg="StopPodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\"" Nov 8 00:30:57.678479 containerd[1513]: time="2025-11-08T00:30:57.677138574Z" level=info msg="StopPodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\"" Nov 8 00:30:57.678564 containerd[1513]: time="2025-11-08T00:30:57.678487398Z" level=info msg="Ensure that sandbox 035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3 in task-service has been cleanup successfully" Nov 8 00:30:57.680951 containerd[1513]: time="2025-11-08T00:30:57.680037916Z" level=info msg="Ensure that sandbox 1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23 in task-service has been cleanup successfully" Nov 8 00:30:57.680951 containerd[1513]: time="2025-11-08T00:30:57.677328246Z" level=info msg="StopPodSandbox for 
\"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\"" Nov 8 00:30:57.680951 containerd[1513]: time="2025-11-08T00:30:57.677237136Z" level=info msg="StopPodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\"" Nov 8 00:30:57.682190 containerd[1513]: time="2025-11-08T00:30:57.681370830Z" level=info msg="Ensure that sandbox 27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9 in task-service has been cleanup successfully" Nov 8 00:30:57.683115 containerd[1513]: time="2025-11-08T00:30:57.682529110Z" level=info msg="Ensure that sandbox 3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b in task-service has been cleanup successfully" Nov 8 00:30:57.684526 containerd[1513]: time="2025-11-08T00:30:57.677282601Z" level=info msg="StopPodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\"" Nov 8 00:30:57.684981 containerd[1513]: time="2025-11-08T00:30:57.684939775Z" level=info msg="Ensure that sandbox 254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5 in task-service has been cleanup successfully" Nov 8 00:30:57.687955 containerd[1513]: time="2025-11-08T00:30:57.677170985Z" level=info msg="StopPodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\"" Nov 8 00:30:57.688371 containerd[1513]: time="2025-11-08T00:30:57.677309101Z" level=info msg="StopPodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\"" Nov 8 00:30:57.689054 containerd[1513]: time="2025-11-08T00:30:57.688913522Z" level=info msg="Ensure that sandbox d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c in task-service has been cleanup successfully" Nov 8 00:30:57.694961 containerd[1513]: time="2025-11-08T00:30:57.694744054Z" level=info msg="Ensure that sandbox 26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190 in task-service has been cleanup successfully" Nov 8 00:30:57.802506 containerd[1513]: time="2025-11-08T00:30:57.801706656Z" level=error msg="StopPodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" failed" error="failed to destroy network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.802734 kubelet[2635]: E1108 00:30:57.802195 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:30:57.802734 kubelet[2635]: E1108 00:30:57.802279 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5"} Nov 8 00:30:57.802734 kubelet[2635]: E1108 00:30:57.802376 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f765679-4f0d-4ea9-957a-c1950533f8b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.802734 kubelet[2635]: E1108 00:30:57.802458 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f765679-4f0d-4ea9-957a-c1950533f8b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dcdpj" podUID="0f765679-4f0d-4ea9-957a-c1950533f8b3" Nov 8 00:30:57.848357 containerd[1513]: time="2025-11-08T00:30:57.848177208Z" level=error msg="StopPodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" failed" error="failed to destroy network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.849455 kubelet[2635]: E1108 00:30:57.848894 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:30:57.849455 kubelet[2635]: E1108 00:30:57.849000 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c"} Nov 8 00:30:57.849455 kubelet[2635]: E1108 00:30:57.849070 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.849455 kubelet[2635]: E1108 00:30:57.849110 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lf6dp" podUID="0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5" Nov 8 00:30:57.852502 containerd[1513]: time="2025-11-08T00:30:57.852312264Z" level=error msg="StopPodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" failed" error="failed to destroy network for sandbox 
\"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.852685 kubelet[2635]: E1108 00:30:57.852652 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:30:57.852763 kubelet[2635]: E1108 00:30:57.852706 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23"} Nov 8 00:30:57.852840 kubelet[2635]: E1108 00:30:57.852786 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b84b572-8061-4813-b462-3eea6f974bdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.852957 kubelet[2635]: E1108 00:30:57.852849 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b84b572-8061-4813-b462-3eea6f974bdd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:30:57.859852 containerd[1513]: time="2025-11-08T00:30:57.858760825Z" level=error msg="StopPodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" failed" error="failed to destroy network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.859852 containerd[1513]: time="2025-11-08T00:30:57.859591507Z" level=error msg="StopPodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" failed" error="failed to destroy network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.860066 kubelet[2635]: E1108 00:30:57.859200 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:30:57.860066 kubelet[2635]: E1108 00:30:57.859297 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3"} Nov 8 00:30:57.860066 kubelet[2635]: E1108 00:30:57.859359 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"562850b7-c26f-461d-bfe8-c8a199e1bb8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.860066 kubelet[2635]: E1108 00:30:57.859393 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"562850b7-c26f-461d-bfe8-c8a199e1bb8c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:30:57.860246 kubelet[2635]: E1108 00:30:57.859815 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:30:57.860246 kubelet[2635]: E1108 00:30:57.859868 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190"} Nov 8 00:30:57.860246 kubelet[2635]: E1108 00:30:57.859888 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a375dcd8-3fbe-482e-815c-13c9165d26d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.860246 kubelet[2635]: E1108 00:30:57.859905 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a375dcd8-3fbe-482e-815c-13c9165d26d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" 
podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:30:57.863680 containerd[1513]: time="2025-11-08T00:30:57.860081416Z" level=error msg="StopPodSandbox for \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" failed" error="failed to destroy network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.863735 kubelet[2635]: E1108 00:30:57.860194 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:30:57.863735 kubelet[2635]: E1108 00:30:57.860216 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b"} Nov 8 00:30:57.863735 kubelet[2635]: E1108 00:30:57.860242 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a730453-478d-46fd-915f-5cbf5e28b105\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.863735 kubelet[2635]: E1108 00:30:57.860260 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a730453-478d-46fd-915f-5cbf5e28b105\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:30:57.866747 containerd[1513]: time="2025-11-08T00:30:57.866677079Z" level=error msg="StopPodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" failed" error="failed to destroy network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.867690 kubelet[2635]: E1108 00:30:57.867129 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:30:57.867690 kubelet[2635]: E1108 
00:30:57.867220 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9"} Nov 8 00:30:57.867690 kubelet[2635]: E1108 00:30:57.867275 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bfc75bc-86f9-445a-9d64-08b33fc703e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.867690 kubelet[2635]: E1108 00:30:57.867326 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bfc75bc-86f9-445a-9d64-08b33fc703e4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:30:57.868590 containerd[1513]: time="2025-11-08T00:30:57.868531161Z" level=error msg="StopPodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" failed" error="failed to destroy network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:57.869530 kubelet[2635]: E1108 00:30:57.868777 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:30:57.869530 kubelet[2635]: E1108 00:30:57.868832 2635 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a"} Nov 8 00:30:57.869530 kubelet[2635]: E1108 00:30:57.868860 2635 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6654d39b-215e-4d8c-8e26-48e297d85f8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:57.869530 kubelet[2635]: E1108 00:30:57.868899 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6654d39b-215e-4d8c-8e26-48e297d85f8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-97cb9cc46-6qg4s" podUID="6654d39b-215e-4d8c-8e26-48e297d85f8d" Nov 8 00:30:59.244283 kubelet[2635]: I1108 00:30:59.243477 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:31:02.365010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269824003.mount: Deactivated successfully. Nov 8 00:31:02.530648 containerd[1513]: time="2025-11-08T00:31:02.526791334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:31:02.533142 containerd[1513]: time="2025-11-08T00:31:02.532245893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:02.577775 containerd[1513]: time="2025-11-08T00:31:02.577713803Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:02.579360 containerd[1513]: time="2025-11-08T00:31:02.579291605Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.97354955s" Nov 8 00:31:02.579561 containerd[1513]: time="2025-11-08T00:31:02.579511895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:31:02.579755 containerd[1513]: time="2025-11-08T00:31:02.579568921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:02.637075 containerd[1513]: time="2025-11-08T00:31:02.636939554Z" level=info msg="CreateContainer within sandbox \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:31:02.761117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3418640850.mount: Deactivated successfully. Nov 8 00:31:02.779956 containerd[1513]: time="2025-11-08T00:31:02.779898729Z" level=info msg="CreateContainer within sandbox \"beaeac10c3214d1473c3efe47735eedee9c7e2c818f08131cdbd10002037185b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b\"" Nov 8 00:31:02.794915 containerd[1513]: time="2025-11-08T00:31:02.794508985Z" level=info msg="StartContainer for \"7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b\"" Nov 8 00:31:03.007670 systemd[1]: Started cri-containerd-7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b.scope - libcontainer container 7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b. 
Nov 8 00:31:03.077154 containerd[1513]: time="2025-11-08T00:31:03.076966395Z" level=info msg="StartContainer for \"7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b\" returns successfully" Nov 8 00:31:03.174338 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:31:03.176217 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:31:03.415572 containerd[1513]: time="2025-11-08T00:31:03.415427514Z" level=info msg="StopPodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\"" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.502 [INFO][3803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.506 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" iface="eth0" netns="/var/run/netns/cni-db8bed0a-0fb2-ceb7-25c9-eb167549cccd" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.506 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" iface="eth0" netns="/var/run/netns/cni-db8bed0a-0fb2-ceb7-25c9-eb167549cccd" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.509 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" iface="eth0" netns="/var/run/netns/cni-db8bed0a-0fb2-ceb7-25c9-eb167549cccd" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.509 [INFO][3803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.509 [INFO][3803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.724 [INFO][3810] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.726 [INFO][3810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.727 [INFO][3810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.752 [WARNING][3810] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.753 [INFO][3810] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.758 [INFO][3810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:03.765249 containerd[1513]: 2025-11-08 00:31:03.761 [INFO][3803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:03.768194 containerd[1513]: time="2025-11-08T00:31:03.766306377Z" level=info msg="TearDown network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" successfully" Nov 8 00:31:03.768194 containerd[1513]: time="2025-11-08T00:31:03.766332044Z" level=info msg="StopPodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" returns successfully" Nov 8 00:31:03.774057 systemd[1]: run-netns-cni\x2ddb8bed0a\x2d0fb2\x2dceb7\x2d25c9\x2deb167549cccd.mount: Deactivated successfully. Nov 8 00:31:03.958006 kubelet[2635]: I1108 00:31:03.957545 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-ca-bundle\") pod \"6654d39b-215e-4d8c-8e26-48e297d85f8d\" (UID: \"6654d39b-215e-4d8c-8e26-48e297d85f8d\") " Nov 8 00:31:03.958006 kubelet[2635]: I1108 00:31:03.957622 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-backend-key-pair\") pod \"6654d39b-215e-4d8c-8e26-48e297d85f8d\" (UID: \"6654d39b-215e-4d8c-8e26-48e297d85f8d\") " Nov 8 00:31:03.958006 kubelet[2635]: I1108 00:31:03.957655 2635 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvsmw\" (UniqueName: \"kubernetes.io/projected/6654d39b-215e-4d8c-8e26-48e297d85f8d-kube-api-access-vvsmw\") pod \"6654d39b-215e-4d8c-8e26-48e297d85f8d\" (UID: \"6654d39b-215e-4d8c-8e26-48e297d85f8d\") " Nov 8 00:31:03.965476 kubelet[2635]: I1108 00:31:03.964231 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6654d39b-215e-4d8c-8e26-48e297d85f8d" (UID: "6654d39b-215e-4d8c-8e26-48e297d85f8d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:31:03.971131 kubelet[2635]: I1108 00:31:03.971079 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6654d39b-215e-4d8c-8e26-48e297d85f8d" (UID: "6654d39b-215e-4d8c-8e26-48e297d85f8d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:31:03.974491 systemd[1]: var-lib-kubelet-pods-6654d39b\x2d215e\x2d4d8c\x2d8e26\x2d48e297d85f8d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:31:03.978668 kubelet[2635]: I1108 00:31:03.978617 2635 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6654d39b-215e-4d8c-8e26-48e297d85f8d-kube-api-access-vvsmw" (OuterVolumeSpecName: "kube-api-access-vvsmw") pod "6654d39b-215e-4d8c-8e26-48e297d85f8d" (UID: "6654d39b-215e-4d8c-8e26-48e297d85f8d"). InnerVolumeSpecName "kube-api-access-vvsmw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:31:03.980450 systemd[1]: var-lib-kubelet-pods-6654d39b\x2d215e\x2d4d8c\x2d8e26\x2d48e297d85f8d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvvsmw.mount: Deactivated successfully. Nov 8 00:31:04.058215 kubelet[2635]: I1108 00:31:04.058017 2635 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-ca-bundle\") on node \"ci-4081-3-6-n-dcea41702a\" DevicePath \"\"" Nov 8 00:31:04.058215 kubelet[2635]: I1108 00:31:04.058069 2635 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6654d39b-215e-4d8c-8e26-48e297d85f8d-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-dcea41702a\" DevicePath \"\"" Nov 8 00:31:04.058215 kubelet[2635]: I1108 00:31:04.058092 2635 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vvsmw\" (UniqueName: \"kubernetes.io/projected/6654d39b-215e-4d8c-8e26-48e297d85f8d-kube-api-access-vvsmw\") on node \"ci-4081-3-6-n-dcea41702a\" DevicePath \"\"" Nov 8 00:31:04.503054 systemd[1]: Removed slice kubepods-besteffort-pod6654d39b_215e_4d8c_8e26_48e297d85f8d.slice - libcontainer container kubepods-besteffort-pod6654d39b_215e_4d8c_8e26_48e297d85f8d.slice. Nov 8 00:31:04.806755 kubelet[2635]: I1108 00:31:04.799266 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bnsqs" podStartSLOduration=3.451012321 podStartE2EDuration="17.789473975s" podCreationTimestamp="2025-11-08 00:30:47 +0000 UTC" firstStartedPulling="2025-11-08 00:30:48.242264878 +0000 UTC m=+19.980760751" lastFinishedPulling="2025-11-08 00:31:02.580726532 +0000 UTC m=+34.319222405" observedRunningTime="2025-11-08 00:31:03.779957336 +0000 UTC m=+35.518453209" watchObservedRunningTime="2025-11-08 00:31:04.789473975 +0000 UTC m=+36.527969849" Nov 8 00:31:04.882647 systemd[1]: Created slice kubepods-besteffort-podb4091b12_a1d1_44f0_9f31_4fa77fdbf9a2.slice - libcontainer container kubepods-besteffort-podb4091b12_a1d1_44f0_9f31_4fa77fdbf9a2.slice. 
Nov 8 00:31:04.901100 kubelet[2635]: I1108 00:31:04.901068 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2-whisker-backend-key-pair\") pod \"whisker-989fcc88-m4sv7\" (UID: \"b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2\") " pod="calico-system/whisker-989fcc88-m4sv7" Nov 8 00:31:04.901100 kubelet[2635]: I1108 00:31:04.901103 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshgt\" (UniqueName: \"kubernetes.io/projected/b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2-kube-api-access-dshgt\") pod \"whisker-989fcc88-m4sv7\" (UID: \"b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2\") " pod="calico-system/whisker-989fcc88-m4sv7" Nov 8 00:31:04.901246 kubelet[2635]: I1108 00:31:04.901122 2635 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2-whisker-ca-bundle\") pod \"whisker-989fcc88-m4sv7\" (UID: \"b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2\") " pod="calico-system/whisker-989fcc88-m4sv7" Nov 8 00:31:05.156545 kernel: bpftool[3999]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:31:05.204856 containerd[1513]: time="2025-11-08T00:31:05.204680611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-989fcc88-m4sv7,Uid:b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:05.408004 systemd-networkd[1395]: calif994ada9fe4: Link UP Nov 8 00:31:05.408970 systemd-networkd[1395]: calif994ada9fe4: Gained carrier Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.305 [INFO][4000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0 whisker-989fcc88- calico-system b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2 889 0 2025-11-08 00:31:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:989fcc88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a whisker-989fcc88-m4sv7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif994ada9fe4 [] [] }} ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.305 [INFO][4000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.334 [INFO][4012] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" HandleID="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.335 [INFO][4012] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" HandleID="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-dcea41702a", "pod":"whisker-989fcc88-m4sv7", "timestamp":"2025-11-08 00:31:05.334616856 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.335 [INFO][4012] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.335 [INFO][4012] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.335 [INFO][4012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.342 [INFO][4012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.363 [INFO][4012] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.368 [INFO][4012] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.370 [INFO][4012] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.372 [INFO][4012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.372 [INFO][4012] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.374 [INFO][4012] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293 Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.380 [INFO][4012] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.386 [INFO][4012] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.1/26] block=192.168.58.0/26 handle="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.386 [INFO][4012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.1/26] handle="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.386 [INFO][4012] ipam/ipam_plugin.go 398: Released 
host-wide IPAM lock. Nov 8 00:31:05.438063 containerd[1513]: 2025-11-08 00:31:05.386 [INFO][4012] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.1/26] IPv6=[] ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" HandleID="k8s-pod-network.2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.439285 containerd[1513]: 2025-11-08 00:31:05.391 [INFO][4000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0", GenerateName:"whisker-989fcc88-", Namespace:"calico-system", SelfLink:"", UID:"b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"989fcc88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"whisker-989fcc88-m4sv7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif994ada9fe4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:05.439285 containerd[1513]: 2025-11-08 00:31:05.392 [INFO][4000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.1/32] ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.439285 containerd[1513]: 2025-11-08 00:31:05.392 [INFO][4000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif994ada9fe4 ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.439285 containerd[1513]: 2025-11-08 00:31:05.407 [INFO][4000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.439285 containerd[1513]: 2025-11-08 00:31:05.407 [INFO][4000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" 
Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0", GenerateName:"whisker-989fcc88-", Namespace:"calico-system", SelfLink:"", UID:"b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"989fcc88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293", Pod:"whisker-989fcc88-m4sv7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif994ada9fe4", MAC:"62:73:e7:5e:5d:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:05.439285 containerd[1513]: 2025-11-08 00:31:05.432 [INFO][4000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293" Namespace="calico-system" Pod="whisker-989fcc88-m4sv7" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--989fcc88--m4sv7-eth0" Nov 8 00:31:05.490120 systemd-networkd[1395]: vxlan.calico: Link UP Nov 8 00:31:05.490129 systemd-networkd[1395]: vxlan.calico: Gained carrier Nov 8 00:31:05.504515 containerd[1513]: time="2025-11-08T00:31:05.504368265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:05.504697 containerd[1513]: time="2025-11-08T00:31:05.504673281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:05.508048 containerd[1513]: time="2025-11-08T00:31:05.507257337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:05.513115 containerd[1513]: time="2025-11-08T00:31:05.509808923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:05.540267 systemd[1]: Started cri-containerd-2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293.scope - libcontainer container 2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293. 
Nov 8 00:31:05.598589 containerd[1513]: time="2025-11-08T00:31:05.598357379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-989fcc88-m4sv7,Uid:b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ddd937bf59409ee47a9ffdd3829ba144215436dd149a82180511ce81dff0293\"" Nov 8 00:31:05.607918 containerd[1513]: time="2025-11-08T00:31:05.607785595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:06.051173 containerd[1513]: time="2025-11-08T00:31:06.051051968Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:06.068009 containerd[1513]: time="2025-11-08T00:31:06.053569139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:06.068196 containerd[1513]: time="2025-11-08T00:31:06.053640472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:06.068559 kubelet[2635]: E1108 00:31:06.068435 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:06.069330 kubelet[2635]: E1108 00:31:06.068557 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:06.076614 kubelet[2635]: E1108 00:31:06.076489 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7e33296a79e41f49551734f14da812b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:06.111063 containerd[1513]: time="2025-11-08T00:31:06.111001463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:06.434999 kubelet[2635]: I1108 00:31:06.434825 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6654d39b-215e-4d8c-8e26-48e297d85f8d" path="/var/lib/kubelet/pods/6654d39b-215e-4d8c-8e26-48e297d85f8d/volumes" Nov 8 00:31:06.573725 containerd[1513]: time="2025-11-08T00:31:06.573646943Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:06.575844 containerd[1513]: time="2025-11-08T00:31:06.575805729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:06.576009 containerd[1513]: time="2025-11-08T00:31:06.575897771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:06.576130 kubelet[2635]: E1108 00:31:06.576077 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:06.576207 kubelet[2635]: E1108 00:31:06.576135 2635 kuberuntime_image.go:42] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:06.576353 kubelet[2635]: E1108 00:31:06.576281 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:06.577771 kubelet[2635]: E1108 00:31:06.577720 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:31:06.812617 kubelet[2635]: E1108 
00:31:06.812381 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:31:06.860818 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Nov 8 00:31:07.116805 systemd-networkd[1395]: calif994ada9fe4: Gained IPv6LL Nov 8 00:31:09.432014 containerd[1513]: time="2025-11-08T00:31:09.431935474Z" level=info msg="StopPodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\"" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.508 [INFO][4154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.508 [INFO][4154] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" iface="eth0" netns="/var/run/netns/cni-74ff8a3a-bfbe-5ff6-f1d3-c901a2a84957" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.508 [INFO][4154] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" iface="eth0" netns="/var/run/netns/cni-74ff8a3a-bfbe-5ff6-f1d3-c901a2a84957" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.508 [INFO][4154] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" iface="eth0" netns="/var/run/netns/cni-74ff8a3a-bfbe-5ff6-f1d3-c901a2a84957" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.508 [INFO][4154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.508 [INFO][4154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.552 [INFO][4161] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.552 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.552 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
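Both pulls above fail with NotFound: neither ghcr.io/flatcar/calico/whisker:v3.30.4 nor ghcr.io/flatcar/calico/whisker-backend:v3.30.4 resolves, so the kubelet escalates the pod from ErrImagePull into ImagePullBackOff. The same existence check can be made off-node against the registry; the sketch below follows the Docker Registry HTTP API v2 token flow that ghcr.io speaks, with the assumption that anonymous pull access to this repository is permitted:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists asks ghcr.io whether repo:tag resolves, which is the question
// the kubelet's pull attempts above answered with NotFound.
func tagExists(repo, tag string) (bool, error) {
	// Fetch an anonymous bearer token scoped for pull.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest: 200 means the tag exists, 404 means it does not.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept",
		"application/vnd.oci.image.index.v1+json, "+
			"application/vnd.docker.distribution.manifest.list.v2+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/whisker", "v3.30.4")
	fmt.Println(ok, err) // expected: false <nil>, matching the NotFound above
}
```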
Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.561 [WARNING][4161] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.562 [INFO][4161] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.565 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:09.573027 containerd[1513]: 2025-11-08 00:31:09.569 [INFO][4154] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:09.576676 containerd[1513]: time="2025-11-08T00:31:09.576600472Z" level=info msg="TearDown network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" successfully" Nov 8 00:31:09.576676 containerd[1513]: time="2025-11-08T00:31:09.576659201Z" level=info msg="StopPodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" returns successfully" Nov 8 00:31:09.578833 containerd[1513]: time="2025-11-08T00:31:09.578773777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lf6dp,Uid:0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5,Namespace:kube-system,Attempt:1,}" Nov 8 00:31:09.580197 systemd[1]: run-netns-cni\x2d74ff8a3a\x2dbfbe\x2d5ff6\x2df1d3\x2dc901a2a84957.mount: Deactivated successfully. 
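The mount unit name just deactivated, run-netns-cni\x2d74ff8a3a\x2dbfbe\x2d5ff6\x2df1d3\x2dc901a2a84957.mount, shows systemd's unit-name escaping: '/' path separators become '-', and literal '-' inside a component becomes \x2d (the \x7e in the kubelet volume units earlier is an escaped '~'). A sketch of the documented escaping rules, not a binding to libsystemd:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath applies the systemd unit-name escaping visible in the log:
// '/' becomes '-', and bytes outside [a-zA-Z0-9:_.] (plus a leading '.')
// are written as \xXX.
func escapePath(p string) string {
	var b strings.Builder
	for i, c := range []byte(strings.Trim(p, "/")) {
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == ':',
			c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the mount unit named in the log line above.
	fmt.Println(escapePath("/run/netns/cni-74ff8a3a-bfbe-5ff6-f1d3-c901a2a84957") + ".mount")
}
```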
Nov 8 00:31:09.785590 systemd-networkd[1395]: calib6d4890cdca: Link UP Nov 8 00:31:09.785830 systemd-networkd[1395]: calib6d4890cdca: Gained carrier Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.670 [INFO][4172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0 coredns-674b8bbfcf- kube-system 0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5 917 0 2025-11-08 00:30:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a coredns-674b8bbfcf-lf6dp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6d4890cdca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.670 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.718 [INFO][4180] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" HandleID="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.719 [INFO][4180] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" HandleID="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-dcea41702a", "pod":"coredns-674b8bbfcf-lf6dp", "timestamp":"2025-11-08 00:31:09.718911699 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.719 [INFO][4180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.719 [INFO][4180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.719 [INFO][4180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.732 [INFO][4180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.742 [INFO][4180] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.751 [INFO][4180] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.754 [INFO][4180] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.757 [INFO][4180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.757 [INFO][4180] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.760 [INFO][4180] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5 Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.768 [INFO][4180] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.778 [INFO][4180] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.2/26] block=192.168.58.0/26 handle="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.778 [INFO][4180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.2/26] handle="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.778 [INFO][4180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
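The block that served whisker-989fcc88-m4sv7 now hands its second address, 192.168.58.2, to coredns-674b8bbfcf-lf6dp, with each claim recorded under a per-sandbox handle of the form k8s-pod-network.<containerID>. A toy allocator showing that handle-keyed, lowest-free-address pattern (purely illustrative; Calico's real allocator works on block bitmaps in its datastore):

```go
package main

import (
	"fmt"
	"net/netip"
)

// allocator hands out the lowest free address in one affine block and
// records it under a per-sandbox handle, like the handles in the log.
type allocator struct {
	block  netip.Prefix
	used   map[netip.Addr]bool
	handle map[string]netip.Addr
}

func (a *allocator) assign(h string) (netip.Addr, error) {
	for addr := a.block.Addr().Next(); a.block.Contains(addr); addr = addr.Next() {
		if !a.used[addr] {
			a.used[addr] = true
			a.handle[h] = addr
			return addr, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", a.block)
}

func main() {
	a := &allocator{
		block:  netip.MustParsePrefix("192.168.58.0/26"),
		used:   map[netip.Addr]bool{},
		handle: map[string]netip.Addr{},
	}
	w, _ := a.assign("k8s-pod-network.2ddd937b...") // whisker sandbox (ID abbreviated)
	c, _ := a.assign("k8s-pod-network.f6e37b6d...") // coredns sandbox (ID abbreviated)
	fmt.Println(w, c)                               // 192.168.58.1 192.168.58.2
}
```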
Nov 8 00:31:09.812321 containerd[1513]: 2025-11-08 00:31:09.778 [INFO][4180] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.2/26] IPv6=[] ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" HandleID="k8s-pod-network.f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.814713 containerd[1513]: 2025-11-08 00:31:09.781 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"coredns-674b8bbfcf-lf6dp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6d4890cdca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:09.814713 containerd[1513]: 2025-11-08 00:31:09.782 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.2/32] ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.814713 containerd[1513]: 2025-11-08 00:31:09.782 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6d4890cdca ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.814713 containerd[1513]: 2025-11-08 00:31:09.786 [INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.814713 containerd[1513]: 2025-11-08 00:31:09.788 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5", Pod:"coredns-674b8bbfcf-lf6dp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6d4890cdca", MAC:"6e:fe:84:19:f7:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:09.814713 containerd[1513]: 2025-11-08 00:31:09.803 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5" Namespace="kube-system" Pod="coredns-674b8bbfcf-lf6dp" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:09.839087 containerd[1513]: time="2025-11-08T00:31:09.838946579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:09.844902 containerd[1513]: time="2025-11-08T00:31:09.841266798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:09.846184 containerd[1513]: time="2025-11-08T00:31:09.841324636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:09.851460 containerd[1513]: time="2025-11-08T00:31:09.851080256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:09.880541 systemd[1]: Started cri-containerd-f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5.scope - libcontainer container f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5. Nov 8 00:31:09.934190 containerd[1513]: time="2025-11-08T00:31:09.934122816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lf6dp,Uid:0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5\"" Nov 8 00:31:09.940576 containerd[1513]: time="2025-11-08T00:31:09.940546213Z" level=info msg="CreateContainer within sandbox \"f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:31:09.964887 containerd[1513]: time="2025-11-08T00:31:09.964832261Z" level=info msg="CreateContainer within sandbox \"f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a66edc4808a1ff9c61becca3576d81840c5145fc23c8a5dd1bbbd7c66a802d4c\"" Nov 8 00:31:09.965768 containerd[1513]: time="2025-11-08T00:31:09.965502809Z" level=info msg="StartContainer for \"a66edc4808a1ff9c61becca3576d81840c5145fc23c8a5dd1bbbd7c66a802d4c\"" Nov 8 00:31:09.995832 systemd[1]: Started cri-containerd-a66edc4808a1ff9c61becca3576d81840c5145fc23c8a5dd1bbbd7c66a802d4c.scope - libcontainer container a66edc4808a1ff9c61becca3576d81840c5145fc23c8a5dd1bbbd7c66a802d4c. Nov 8 00:31:10.031243 containerd[1513]: time="2025-11-08T00:31:10.031092352Z" level=info msg="StartContainer for \"a66edc4808a1ff9c61becca3576d81840c5145fc23c8a5dd1bbbd7c66a802d4c\" returns successfully" Nov 8 00:31:10.435613 containerd[1513]: time="2025-11-08T00:31:10.435506735Z" level=info msg="StopPodSandbox for \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\"" Nov 8 00:31:10.442625 containerd[1513]: time="2025-11-08T00:31:10.442552873Z" level=info msg="StopPodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\"" Nov 8 00:31:10.443108 containerd[1513]: time="2025-11-08T00:31:10.443058504Z" level=info msg="StopPodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\"" Nov 8 00:31:10.449593 containerd[1513]: time="2025-11-08T00:31:10.449531987Z" level=info msg="StopPodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\"" Nov 8 00:31:10.451987 containerd[1513]: time="2025-11-08T00:31:10.451543020Z" level=info msg="StopPodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\"" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.614 [INFO][4339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.615 [INFO][4339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" iface="eth0" netns="/var/run/netns/cni-313762a6-adc8-ccbf-c261-81029a0d9a7f" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.615 [INFO][4339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" iface="eth0" netns="/var/run/netns/cni-313762a6-adc8-ccbf-c261-81029a0d9a7f" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.615 [INFO][4339] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" iface="eth0" netns="/var/run/netns/cni-313762a6-adc8-ccbf-c261-81029a0d9a7f" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.615 [INFO][4339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.615 [INFO][4339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.664 [INFO][4356] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.671 [INFO][4356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.671 [INFO][4356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.689 [WARNING][4356] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.690 [INFO][4356] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.699 [INFO][4356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:10.723980 containerd[1513]: 2025-11-08 00:31:10.715 [INFO][4339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:10.731743 systemd[1]: run-netns-cni\x2d313762a6\x2dadc8\x2dccbf\x2dc261\x2d81029a0d9a7f.mount: Deactivated successfully. 
Nov 8 00:31:10.733115 containerd[1513]: time="2025-11-08T00:31:10.730484688Z" level=info msg="TearDown network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" successfully" Nov 8 00:31:10.733115 containerd[1513]: time="2025-11-08T00:31:10.731801188Z" level=info msg="StopPodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" returns successfully" Nov 8 00:31:10.735296 containerd[1513]: time="2025-11-08T00:31:10.734956824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b858bf6c9-q5c9k,Uid:7bfc75bc-86f9-445a-9d64-08b33fc703e4,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.605 [INFO][4323] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.605 [INFO][4323] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" iface="eth0" netns="/var/run/netns/cni-283ce994-c255-9082-7536-a0c99c4d11a0" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.606 [INFO][4323] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" iface="eth0" netns="/var/run/netns/cni-283ce994-c255-9082-7536-a0c99c4d11a0" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.609 [INFO][4323] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" iface="eth0" netns="/var/run/netns/cni-283ce994-c255-9082-7536-a0c99c4d11a0" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.609 [INFO][4323] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.609 [INFO][4323] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.717 [INFO][4354] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.717 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.717 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.747 [WARNING][4354] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.747 [INFO][4354] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.771 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:10.806520 containerd[1513]: 2025-11-08 00:31:10.793 [INFO][4323] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:10.808833 containerd[1513]: time="2025-11-08T00:31:10.808806909Z" level=info msg="TearDown network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" successfully" Nov 8 00:31:10.813035 systemd[1]: run-netns-cni\x2d283ce994\x2dc255\x2d9082\x2d7536\x2da0c99c4d11a0.mount: Deactivated successfully. Nov 8 00:31:10.816435 containerd[1513]: time="2025-11-08T00:31:10.814734023Z" level=info msg="StopPodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" returns successfully" Nov 8 00:31:10.816435 containerd[1513]: time="2025-11-08T00:31:10.816093464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcdpj,Uid:0f765679-4f0d-4ea9-957a-c1950533f8b3,Namespace:kube-system,Attempt:1,}" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.620 [INFO][4304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.620 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" iface="eth0" netns="/var/run/netns/cni-94af9adc-f0cd-b7f5-3290-8fd31a6992e4" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.620 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" iface="eth0" netns="/var/run/netns/cni-94af9adc-f0cd-b7f5-3290-8fd31a6992e4" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.621 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" iface="eth0" netns="/var/run/netns/cni-94af9adc-f0cd-b7f5-3290-8fd31a6992e4" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.621 [INFO][4304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.621 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.728 [INFO][4364] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.729 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.771 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.792 [WARNING][4364] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.792 [INFO][4364] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.799 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:10.818619 containerd[1513]: 2025-11-08 00:31:10.805 [INFO][4304] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:10.818974 containerd[1513]: time="2025-11-08T00:31:10.818800324Z" level=info msg="TearDown network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" successfully" Nov 8 00:31:10.818974 containerd[1513]: time="2025-11-08T00:31:10.818820462Z" level=info msg="StopPodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" returns successfully" Nov 8 00:31:10.819816 containerd[1513]: time="2025-11-08T00:31:10.819784107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p4b9r,Uid:8b84b572-8061-4813-b462-3eea6f974bdd,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:10.901432 kubelet[2635]: I1108 00:31:10.900154 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lf6dp" podStartSLOduration=37.900132409 podStartE2EDuration="37.900132409s" podCreationTimestamp="2025-11-08 00:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:10.899925023 +0000 UTC m=+42.638420897" watchObservedRunningTime="2025-11-08 00:31:10.900132409 +0000 UTC m=+42.638628283" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.680 [INFO][4328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.681 [INFO][4328] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" iface="eth0" netns="/var/run/netns/cni-62385e04-def1-767e-c941-5a79b1169fa7" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.681 [INFO][4328] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" iface="eth0" netns="/var/run/netns/cni-62385e04-def1-767e-c941-5a79b1169fa7" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.683 [INFO][4328] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" iface="eth0" netns="/var/run/netns/cni-62385e04-def1-767e-c941-5a79b1169fa7" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.683 [INFO][4328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.683 [INFO][4328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.813 [INFO][4375] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.813 [INFO][4375] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.813 [INFO][4375] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.859 [WARNING][4375] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.859 [INFO][4375] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.879 [INFO][4375] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:10.907003 containerd[1513]: 2025-11-08 00:31:10.888 [INFO][4328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:10.914737 containerd[1513]: time="2025-11-08T00:31:10.913205819Z" level=info msg="TearDown network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" successfully" Nov 8 00:31:10.914737 containerd[1513]: time="2025-11-08T00:31:10.913244211Z" level=info msg="StopPodSandbox for \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" returns successfully" Nov 8 00:31:10.918063 containerd[1513]: time="2025-11-08T00:31:10.917999694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4hcx,Uid:7a730453-478d-46fd-915f-5cbf5e28b105,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.684 [INFO][4329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.684 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" iface="eth0" netns="/var/run/netns/cni-9ec22b50-32cf-720a-2642-2c19cebdd0bb" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.685 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" iface="eth0" netns="/var/run/netns/cni-9ec22b50-32cf-720a-2642-2c19cebdd0bb" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.685 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" iface="eth0" netns="/var/run/netns/cni-9ec22b50-32cf-720a-2642-2c19cebdd0bb" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.686 [INFO][4329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.686 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.843 [INFO][4380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.844 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.883 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.911 [WARNING][4380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.912 [INFO][4380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.927 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:10.954338 containerd[1513]: 2025-11-08 00:31:10.944 [INFO][4329] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:10.955424 containerd[1513]: time="2025-11-08T00:31:10.954995564Z" level=info msg="TearDown network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" successfully" Nov 8 00:31:10.955424 containerd[1513]: time="2025-11-08T00:31:10.955222327Z" level=info msg="StopPodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" returns successfully" Nov 8 00:31:10.957027 containerd[1513]: time="2025-11-08T00:31:10.956982463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-brv5c,Uid:562850b7-c26f-461d-bfe8-c8a199e1bb8c,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:31:11.159143 systemd-networkd[1395]: cali52d3a186381: Link UP Nov 8 00:31:11.160565 systemd-networkd[1395]: cali52d3a186381: Gained carrier Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:10.949 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0 calico-kube-controllers-5b858bf6c9- calico-system 7bfc75bc-86f9-445a-9d64-08b33fc703e4 934 0 2025-11-08 00:30:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5b858bf6c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a calico-kube-controllers-5b858bf6c9-q5c9k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali52d3a186381 [] [] }} ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:10.954 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.060 [INFO][4430] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" HandleID="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.061 [INFO][4430] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" HandleID="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-dcea41702a", "pod":"calico-kube-controllers-5b858bf6c9-q5c9k", "timestamp":"2025-11-08 00:31:11.060762271 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.061 [INFO][4430] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.061 [INFO][4430] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.062 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.083 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.090 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.107 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.110 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.116 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.117 [INFO][4430] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.123 [INFO][4430] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57 Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.133 [INFO][4430] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.141 [INFO][4430] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.3/26] block=192.168.58.0/26 handle="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.141 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.3/26] handle="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.142 [INFO][4430] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:11.184702 containerd[1513]: 2025-11-08 00:31:11.142 [INFO][4430] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.3/26] IPv6=[] ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" HandleID="k8s-pod-network.1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.186259 containerd[1513]: 2025-11-08 00:31:11.144 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0", GenerateName:"calico-kube-controllers-5b858bf6c9-", Namespace:"calico-system", SelfLink:"", UID:"7bfc75bc-86f9-445a-9d64-08b33fc703e4", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b858bf6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"calico-kube-controllers-5b858bf6c9-q5c9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52d3a186381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.186259 containerd[1513]: 2025-11-08 00:31:11.145 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.3/32] ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.186259 containerd[1513]: 2025-11-08 00:31:11.146 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52d3a186381 ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.186259 containerd[1513]: 2025-11-08 00:31:11.160 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" 
WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.186259 containerd[1513]: 2025-11-08 00:31:11.163 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0", GenerateName:"calico-kube-controllers-5b858bf6c9-", Namespace:"calico-system", SelfLink:"", UID:"7bfc75bc-86f9-445a-9d64-08b33fc703e4", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b858bf6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57", Pod:"calico-kube-controllers-5b858bf6c9-q5c9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52d3a186381", MAC:"56:86:95:22:d8:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.186259 containerd[1513]: 2025-11-08 00:31:11.177 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57" Namespace="calico-system" Pod="calico-kube-controllers-5b858bf6c9-q5c9k" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:11.220525 containerd[1513]: time="2025-11-08T00:31:11.220259790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:11.220525 containerd[1513]: time="2025-11-08T00:31:11.220304122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:11.220525 containerd[1513]: time="2025-11-08T00:31:11.220365597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.221230 containerd[1513]: time="2025-11-08T00:31:11.220495319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.230576 systemd-networkd[1395]: cali8fe312b4e87: Link UP Nov 8 00:31:11.231875 systemd-networkd[1395]: cali8fe312b4e87: Gained carrier Nov 8 00:31:11.249784 systemd[1]: Started cri-containerd-1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57.scope - libcontainer container 1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57. Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.059 [INFO][4404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0 coredns-674b8bbfcf- kube-system 0f765679-4f0d-4ea9-957a-c1950533f8b3 932 0 2025-11-08 00:30:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a coredns-674b8bbfcf-dcdpj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8fe312b4e87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.061 [INFO][4404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.103 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" HandleID="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.104 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" HandleID="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-dcea41702a", "pod":"coredns-674b8bbfcf-dcdpj", "timestamp":"2025-11-08 00:31:11.103800586 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.104 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.142 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.143 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.184 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.193 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.202 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.203 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.207 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.207 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.209 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3 Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.215 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.221 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.4/26] block=192.168.58.0/26 handle="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.221 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.4/26] handle="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.222 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:11.251230 containerd[1513]: 2025-11-08 00:31:11.222 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.4/26] IPv6=[] ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" HandleID="k8s-pod-network.1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.253350 containerd[1513]: 2025-11-08 00:31:11.228 [INFO][4404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f765679-4f0d-4ea9-957a-c1950533f8b3", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"coredns-674b8bbfcf-dcdpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fe312b4e87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.253350 containerd[1513]: 2025-11-08 00:31:11.228 [INFO][4404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.4/32] ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.253350 containerd[1513]: 2025-11-08 00:31:11.228 [INFO][4404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fe312b4e87 ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.253350 containerd[1513]: 2025-11-08 00:31:11.231 [INFO][4404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.253350 containerd[1513]: 2025-11-08 00:31:11.232 [INFO][4404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f765679-4f0d-4ea9-957a-c1950533f8b3", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3", Pod:"coredns-674b8bbfcf-dcdpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fe312b4e87", MAC:"96:b7:37:e8:ab:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.253350 containerd[1513]: 2025-11-08 00:31:11.247 [INFO][4404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-dcdpj" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:11.276620 containerd[1513]: time="2025-11-08T00:31:11.276304995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:11.276620 containerd[1513]: time="2025-11-08T00:31:11.276501130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:11.276620 containerd[1513]: time="2025-11-08T00:31:11.276511909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.276849 containerd[1513]: time="2025-11-08T00:31:11.276661268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.295583 systemd[1]: Started cri-containerd-1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3.scope - libcontainer container 1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3. Nov 8 00:31:11.337045 containerd[1513]: time="2025-11-08T00:31:11.337009656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5b858bf6c9-q5c9k,Uid:7bfc75bc-86f9-445a-9d64-08b33fc703e4,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57\"" Nov 8 00:31:11.341754 containerd[1513]: time="2025-11-08T00:31:11.341721991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:11.363894 systemd-networkd[1395]: cali4f62a7351ff: Link UP Nov 8 00:31:11.365750 systemd-networkd[1395]: cali4f62a7351ff: Gained carrier Nov 8 00:31:11.379233 containerd[1513]: time="2025-11-08T00:31:11.379196655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dcdpj,Uid:0f765679-4f0d-4ea9-957a-c1950533f8b3,Namespace:kube-system,Attempt:1,} returns sandbox id \"1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3\"" Nov 8 00:31:11.392276 containerd[1513]: time="2025-11-08T00:31:11.391877678Z" level=info msg="CreateContainer within sandbox \"1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.004 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0 goldmane-666569f655- calico-system 8b84b572-8061-4813-b462-3eea6f974bdd 933 0 2025-11-08 00:30:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a goldmane-666569f655-p4b9r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4f62a7351ff [] [] }} ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.006 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.114 [INFO][4455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" HandleID="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.114 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" HandleID="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" 
Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bef00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-dcea41702a", "pod":"goldmane-666569f655-p4b9r", "timestamp":"2025-11-08 00:31:11.114093413 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.114 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.221 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.221 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.284 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.293 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.303 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.306 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.310 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.310 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.318 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.330 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.342 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.5/26] block=192.168.58.0/26 handle="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.343 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.5/26] handle="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.343 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:11.410714 containerd[1513]: 2025-11-08 00:31:11.343 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.5/26] IPv6=[] ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" HandleID="k8s-pod-network.09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.413894 containerd[1513]: 2025-11-08 00:31:11.356 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b84b572-8061-4813-b462-3eea6f974bdd", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"goldmane-666569f655-p4b9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f62a7351ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.413894 containerd[1513]: 2025-11-08 00:31:11.357 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.5/32] ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.413894 containerd[1513]: 2025-11-08 00:31:11.357 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f62a7351ff ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.413894 containerd[1513]: 2025-11-08 00:31:11.367 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.413894 containerd[1513]: 2025-11-08 00:31:11.368 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" 
Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b84b572-8061-4813-b462-3eea6f974bdd", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c", Pod:"goldmane-666569f655-p4b9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f62a7351ff", MAC:"96:ca:0e:5f:cd:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.413894 containerd[1513]: 2025-11-08 00:31:11.394 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c" Namespace="calico-system" Pod="goldmane-666569f655-p4b9r" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:11.442250 containerd[1513]: time="2025-11-08T00:31:11.441902462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:11.442250 containerd[1513]: time="2025-11-08T00:31:11.442055978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:11.442767 containerd[1513]: time="2025-11-08T00:31:11.442069603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.442767 containerd[1513]: time="2025-11-08T00:31:11.442544978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.460874 systemd-networkd[1395]: calia32d007b9de: Link UP Nov 8 00:31:11.468602 systemd[1]: Started cri-containerd-09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c.scope - libcontainer container 09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c. 
Nov 8 00:31:11.472553 systemd-networkd[1395]: calia32d007b9de: Gained carrier Nov 8 00:31:11.475062 containerd[1513]: time="2025-11-08T00:31:11.474921368Z" level=info msg="CreateContainer within sandbox \"1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0c17ffb577c058cb5871527537373416fbc73b176a7908fa3e6c876743aaa35\"" Nov 8 00:31:11.476695 containerd[1513]: time="2025-11-08T00:31:11.476639598Z" level=info msg="StartContainer for \"f0c17ffb577c058cb5871527537373416fbc73b176a7908fa3e6c876743aaa35\"" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.106 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0 csi-node-driver- calico-system 7a730453-478d-46fd-915f-5cbf5e28b105 935 0 2025-11-08 00:30:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a csi-node-driver-m4hcx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia32d007b9de [] [] }} ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.106 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.171 [INFO][4483] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" HandleID="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.172 [INFO][4483] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" HandleID="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-dcea41702a", "pod":"csi-node-driver-m4hcx", "timestamp":"2025-11-08 00:31:11.171914845 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.172 [INFO][4483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.343 [INFO][4483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.346 [INFO][4483] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.388 [INFO][4483] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.420 [INFO][4483] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.427 [INFO][4483] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.430 [INFO][4483] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.433 [INFO][4483] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.433 [INFO][4483] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.435 [INFO][4483] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96 Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.441 [INFO][4483] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.454 [INFO][4483] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.6/26] block=192.168.58.0/26 handle="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.454 [INFO][4483] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.6/26] handle="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.454 [INFO][4483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:11.499948 containerd[1513]: 2025-11-08 00:31:11.454 [INFO][4483] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.6/26] IPv6=[] ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" HandleID="k8s-pod-network.0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.500813 containerd[1513]: 2025-11-08 00:31:11.458 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a730453-478d-46fd-915f-5cbf5e28b105", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"csi-node-driver-m4hcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia32d007b9de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.500813 containerd[1513]: 2025-11-08 00:31:11.458 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.6/32] ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.500813 containerd[1513]: 2025-11-08 00:31:11.458 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia32d007b9de ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.500813 containerd[1513]: 2025-11-08 00:31:11.474 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.500813 containerd[1513]: 2025-11-08 00:31:11.477 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a730453-478d-46fd-915f-5cbf5e28b105", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96", Pod:"csi-node-driver-m4hcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia32d007b9de", MAC:"a2:5b:ca:eb:32:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.500813 containerd[1513]: 2025-11-08 00:31:11.494 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96" Namespace="calico-system" Pod="csi-node-driver-m4hcx" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:11.548446 systemd[1]: Started cri-containerd-f0c17ffb577c058cb5871527537373416fbc73b176a7908fa3e6c876743aaa35.scope - libcontainer container f0c17ffb577c058cb5871527537373416fbc73b176a7908fa3e6c876743aaa35. Nov 8 00:31:11.551589 containerd[1513]: time="2025-11-08T00:31:11.551149055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:11.551589 containerd[1513]: time="2025-11-08T00:31:11.551218004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:11.551589 containerd[1513]: time="2025-11-08T00:31:11.551228813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.551589 containerd[1513]: time="2025-11-08T00:31:11.551309964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.594202 systemd[1]: run-netns-cni\x2d62385e04\x2ddef1\x2d767e\x2dc941\x2d5a79b1169fa7.mount: Deactivated successfully. Nov 8 00:31:11.594523 systemd[1]: run-netns-cni\x2d94af9adc\x2df0cd\x2db7f5\x2d3290\x2d8fd31a6992e4.mount: Deactivated successfully. 
Nov 8 00:31:11.594576 systemd[1]: run-netns-cni\x2d9ec22b50\x2d32cf\x2d720a\x2d2642\x2d2c19cebdd0bb.mount: Deactivated successfully. Nov 8 00:31:11.601494 systemd-networkd[1395]: cali9b0f37d1dd6: Link UP Nov 8 00:31:11.602827 systemd-networkd[1395]: cali9b0f37d1dd6: Gained carrier Nov 8 00:31:11.617628 systemd[1]: Started cri-containerd-0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96.scope - libcontainer container 0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96. Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.142 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0 calico-apiserver-64c9474b6f- calico-apiserver 562850b7-c26f-461d-bfe8-c8a199e1bb8c 936 0 2025-11-08 00:30:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c9474b6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a calico-apiserver-64c9474b6f-brv5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b0f37d1dd6 [] [] }} ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.143 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.214 [INFO][4495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" HandleID="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.214 [INFO][4495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" HandleID="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad5b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-dcea41702a", "pod":"calico-apiserver-64c9474b6f-brv5c", "timestamp":"2025-11-08 00:31:11.214818928 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.214 [INFO][4495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.454 [INFO][4495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.455 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.487 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.522 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.532 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.537 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.540 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.540 [INFO][4495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.542 [INFO][4495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7 Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.551 [INFO][4495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.565 [INFO][4495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.7/26] block=192.168.58.0/26 handle="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.565 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.7/26] handle="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.565 [INFO][4495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
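The ipam.go lines just above trace the block-affinity path: confirm this host's affinity for 192.168.58.0/26, load the block, then assign the next free address from it (here .7, right after .6 went to the csi-node-driver pod). A dependency-free sketch of that final scan step, with a map standing in for the block document Calico actually keeps in the datastore:

// blockscan_sketch.go
//
// Dependency-free sketch of the last IPAM step logged above: given the
// host's affine block (192.168.58.0/26) and the addresses already in
// use, pick the first free one. The "used" set below is illustrative;
// real Calico tracks allocations (and reservations) in the block itself.
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks the block's addresses in order and returns the first
// one not present in used; ok is false if the block is exhausted.
func firstFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.58.0/26")
	used := map[netip.Addr]bool{}
	// .0 plus .1 through .6: earlier endpoints on this node (the
	// csi-node-driver pod received .6 in the entries above).
	for i := 0; i <= 6; i++ {
		used[netip.MustParseAddr(fmt.Sprintf("192.168.58.%d", i))] = true
	}
	if a, ok := firstFree(block, used); ok {
		fmt.Println("next address:", a) // 192.168.58.7, matching the log
	}
}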
Nov 8 00:31:11.626609 containerd[1513]: 2025-11-08 00:31:11.565 [INFO][4495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.7/26] IPv6=[] ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" HandleID="k8s-pod-network.13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.627122 containerd[1513]: 2025-11-08 00:31:11.571 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"562850b7-c26f-461d-bfe8-c8a199e1bb8c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"calico-apiserver-64c9474b6f-brv5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b0f37d1dd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.627122 containerd[1513]: 2025-11-08 00:31:11.571 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.7/32] ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.627122 containerd[1513]: 2025-11-08 00:31:11.571 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b0f37d1dd6 ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.627122 containerd[1513]: 2025-11-08 00:31:11.600 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.627122 containerd[1513]: 2025-11-08 00:31:11.607 
[INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"562850b7-c26f-461d-bfe8-c8a199e1bb8c", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7", Pod:"calico-apiserver-64c9474b6f-brv5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b0f37d1dd6", MAC:"c6:6c:04:c5:7d:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.627122 containerd[1513]: 2025-11-08 00:31:11.616 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-brv5c" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:11.650745 containerd[1513]: time="2025-11-08T00:31:11.650125965Z" level=info msg="StartContainer for \"f0c17ffb577c058cb5871527537373416fbc73b176a7908fa3e6c876743aaa35\" returns successfully" Nov 8 00:31:11.683473 containerd[1513]: time="2025-11-08T00:31:11.681012129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:11.683473 containerd[1513]: time="2025-11-08T00:31:11.681058917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:11.683473 containerd[1513]: time="2025-11-08T00:31:11.681072723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.683473 containerd[1513]: time="2025-11-08T00:31:11.681132093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.702444 containerd[1513]: time="2025-11-08T00:31:11.701920474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p4b9r,Uid:8b84b572-8061-4813-b462-3eea6f974bdd,Namespace:calico-system,Attempt:1,} returns sandbox id \"09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c\"" Nov 8 00:31:11.716095 systemd[1]: run-containerd-runc-k8s.io-13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7-runc.0WOaZr.mount: Deactivated successfully. Nov 8 00:31:11.723606 systemd[1]: Started cri-containerd-13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7.scope - libcontainer container 13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7. Nov 8 00:31:11.725035 systemd-networkd[1395]: calib6d4890cdca: Gained IPv6LL Nov 8 00:31:11.731583 containerd[1513]: time="2025-11-08T00:31:11.730117441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m4hcx,Uid:7a730453-478d-46fd-915f-5cbf5e28b105,Namespace:calico-system,Attempt:1,} returns sandbox id \"0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96\"" Nov 8 00:31:11.787493 containerd[1513]: time="2025-11-08T00:31:11.787457667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-brv5c,Uid:562850b7-c26f-461d-bfe8-c8a199e1bb8c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7\"" Nov 8 00:31:11.799971 containerd[1513]: time="2025-11-08T00:31:11.799769083Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:11.801430 containerd[1513]: time="2025-11-08T00:31:11.801275598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:11.801430 containerd[1513]: time="2025-11-08T00:31:11.801337152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:11.801729 kubelet[2635]: E1108 00:31:11.801659 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:11.801729 kubelet[2635]: E1108 00:31:11.801712 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:11.802254 containerd[1513]: time="2025-11-08T00:31:11.802115372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:11.803691 kubelet[2635]: E1108 00:31:11.803489 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n49w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b858bf6c9-q5c9k_calico-system(7bfc75bc-86f9-445a-9d64-08b33fc703e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:11.806248 kubelet[2635]: E1108 00:31:11.804671 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:31:11.853769 kubelet[2635]: E1108 00:31:11.853704 2635 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:31:12.264430 containerd[1513]: time="2025-11-08T00:31:12.264280468Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:12.266710 containerd[1513]: time="2025-11-08T00:31:12.266501685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:12.266710 containerd[1513]: time="2025-11-08T00:31:12.266630244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:12.266986 kubelet[2635]: E1108 00:31:12.266907 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:12.267511 kubelet[2635]: E1108 00:31:12.266987 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:12.267511 kubelet[2635]: E1108 00:31:12.267334 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-595cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p4b9r_calico-system(8b84b572-8061-4813-b462-3eea6f974bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:12.268766 containerd[1513]: time="2025-11-08T00:31:12.268262115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:12.268853 kubelet[2635]: E1108 00:31:12.268691 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:31:12.301546 systemd-networkd[1395]: cali52d3a186381: Gained IPv6LL Nov 8 00:31:12.445940 containerd[1513]: time="2025-11-08T00:31:12.444177682Z" level=info msg="StopPodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\"" Nov 8 00:31:12.535484 kubelet[2635]: I1108 00:31:12.534829 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dcdpj" podStartSLOduration=39.534381626 podStartE2EDuration="39.534381626s" podCreationTimestamp="2025-11-08 00:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:11.892903213 +0000 UTC m=+43.631399105" watchObservedRunningTime="2025-11-08 00:31:12.534381626 +0000 UTC m=+44.272877519" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.529 [INFO][4801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.530 [INFO][4801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" iface="eth0" netns="/var/run/netns/cni-f271ad86-3380-a19e-39b5-cc3af43a9750" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.532 [INFO][4801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" iface="eth0" netns="/var/run/netns/cni-f271ad86-3380-a19e-39b5-cc3af43a9750" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.532 [INFO][4801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" iface="eth0" netns="/var/run/netns/cni-f271ad86-3380-a19e-39b5-cc3af43a9750" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.532 [INFO][4801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.532 [INFO][4801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.560 [INFO][4808] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.561 [INFO][4808] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.561 [INFO][4808] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.571 [WARNING][4808] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.571 [INFO][4808] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.574 [INFO][4808] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:12.581004 containerd[1513]: 2025-11-08 00:31:12.578 [INFO][4801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:12.587518 containerd[1513]: time="2025-11-08T00:31:12.585395798Z" level=info msg="TearDown network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" successfully" Nov 8 00:31:12.587518 containerd[1513]: time="2025-11-08T00:31:12.585468123Z" level=info msg="StopPodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" returns successfully" Nov 8 00:31:12.589331 systemd[1]: run-netns-cni\x2df271ad86\x2d3380\x2da19e\x2d39b5\x2dcc3af43a9750.mount: Deactivated successfully. Nov 8 00:31:12.589712 containerd[1513]: time="2025-11-08T00:31:12.589690027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-jbmnh,Uid:a375dcd8-3fbe-482e-815c-13c9165d26d2,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:31:12.712277 systemd-networkd[1395]: calie87a9e69999: Link UP Nov 8 00:31:12.713858 systemd-networkd[1395]: calie87a9e69999: Gained carrier Nov 8 00:31:12.723373 containerd[1513]: time="2025-11-08T00:31:12.723283121Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:12.743651 containerd[1513]: time="2025-11-08T00:31:12.741488107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:12.743651 containerd[1513]: time="2025-11-08T00:31:12.741630993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:12.743651 containerd[1513]: time="2025-11-08T00:31:12.743482170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:12.743832 kubelet[2635]: E1108 00:31:12.741837 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:12.743832 kubelet[2635]: E1108 00:31:12.741886 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:12.743832 kubelet[2635]: E1108 00:31:12.743295 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.646 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0 calico-apiserver-64c9474b6f- calico-apiserver a375dcd8-3fbe-482e-815c-13c9165d26d2 985 0 2025-11-08 00:30:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64c9474b6f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-dcea41702a calico-apiserver-64c9474b6f-jbmnh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie87a9e69999 [] [] }} ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.646 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.671 [INFO][4826] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" HandleID="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.672 [INFO][4826] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" HandleID="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-dcea41702a", "pod":"calico-apiserver-64c9474b6f-jbmnh", "timestamp":"2025-11-08 00:31:12.671965663 +0000 UTC"}, Hostname:"ci-4081-3-6-n-dcea41702a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.672 [INFO][4826] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.672 [INFO][4826] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.672 [INFO][4826] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-dcea41702a' Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.678 [INFO][4826] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.682 [INFO][4826] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.685 [INFO][4826] ipam/ipam.go 511: Trying affinity for 192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.687 [INFO][4826] ipam/ipam.go 158: Attempting to load block cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.690 [INFO][4826] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.58.0/26 host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.690 [INFO][4826] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.58.0/26 handle="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.691 [INFO][4826] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.695 [INFO][4826] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.58.0/26 handle="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.703 [INFO][4826] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.58.8/26] block=192.168.58.0/26 handle="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.704 [INFO][4826] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.58.8/26] handle="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" host="ci-4081-3-6-n-dcea41702a" Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.704 [INFO][4826] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:12.746617 containerd[1513]: 2025-11-08 00:31:12.704 [INFO][4826] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.58.8/26] IPv6=[] ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" HandleID="k8s-pod-network.26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.748545 containerd[1513]: 2025-11-08 00:31:12.708 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a375dcd8-3fbe-482e-815c-13c9165d26d2", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"", Pod:"calico-apiserver-64c9474b6f-jbmnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie87a9e69999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:12.748545 containerd[1513]: 2025-11-08 00:31:12.708 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.8/32] ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.748545 containerd[1513]: 2025-11-08 00:31:12.708 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie87a9e69999 ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.748545 containerd[1513]: 2025-11-08 00:31:12.711 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.748545 containerd[1513]: 2025-11-08 00:31:12.711 
[INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a375dcd8-3fbe-482e-815c-13c9165d26d2", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b", Pod:"calico-apiserver-64c9474b6f-jbmnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie87a9e69999", MAC:"4e:1a:36:36:ec:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:12.748545 containerd[1513]: 2025-11-08 00:31:12.740 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b" Namespace="calico-apiserver" Pod="calico-apiserver-64c9474b6f-jbmnh" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:12.749555 systemd-networkd[1395]: cali4f62a7351ff: Gained IPv6LL Nov 8 00:31:12.768814 containerd[1513]: time="2025-11-08T00:31:12.768584643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:12.768814 containerd[1513]: time="2025-11-08T00:31:12.768642180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:12.768814 containerd[1513]: time="2025-11-08T00:31:12.768655265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:12.768814 containerd[1513]: time="2025-11-08T00:31:12.768715156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:12.794623 systemd[1]: Started cri-containerd-26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b.scope - libcontainer container 26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b. 
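Each endpoint the plugin writes back carries a freshly generated MAC (a2:5b:ca:eb:32:65, c6:6c:04:c5:7d:0b, and 4e:1a:36:36:ec:87 in the dumps above). All three have the locally-administered bit set and the multicast bit clear, as randomly generated veth MACs should; a quick stdlib check of that property:

// mac_check_sketch.go
//
// Verifies a property visible in the endpoint dumps above: the MACs on
// the cali* veths are unicast and locally administered (bit 0x02 of the
// first octet set), i.e. generated rather than burned-in addresses.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, s := range []string{
		"a2:5b:ca:eb:32:65", // csi-node-driver-m4hcx
		"c6:6c:04:c5:7d:0b", // calico-apiserver-64c9474b6f-brv5c
		"4e:1a:36:36:ec:87", // calico-apiserver-64c9474b6f-jbmnh
	} {
		mac, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		local := mac[0]&0x02 != 0   // locally administered bit
		unicast := mac[0]&0x01 == 0 // I/G bit clear => unicast
		fmt.Printf("%s local=%v unicast=%v\n", mac, local, unicast)
	}
}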
Nov 8 00:31:12.814013 systemd-networkd[1395]: calia32d007b9de: Gained IPv6LL Nov 8 00:31:12.832531 containerd[1513]: time="2025-11-08T00:31:12.832464237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64c9474b6f-jbmnh,Uid:a375dcd8-3fbe-482e-815c-13c9165d26d2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b\"" Nov 8 00:31:12.874898 kubelet[2635]: E1108 00:31:12.874855 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:31:12.875786 kubelet[2635]: E1108 00:31:12.875675 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:31:12.876712 systemd-networkd[1395]: cali9b0f37d1dd6: Gained IPv6LL Nov 8 00:31:13.170930 containerd[1513]: time="2025-11-08T00:31:13.170720796Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:13.173294 containerd[1513]: time="2025-11-08T00:31:13.173170119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:13.173532 containerd[1513]: time="2025-11-08T00:31:13.173313917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:13.174193 kubelet[2635]: E1108 00:31:13.173657 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:13.174193 kubelet[2635]: E1108 00:31:13.173736 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:13.174193 kubelet[2635]: E1108 00:31:13.174064 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlzmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-brv5c_calico-apiserver(562850b7-c26f-461d-bfe8-c8a199e1bb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:13.176553 kubelet[2635]: E1108 00:31:13.176338 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:31:13.176714 containerd[1513]: time="2025-11-08T00:31:13.175181015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:13.198727 systemd-networkd[1395]: cali8fe312b4e87: Gained IPv6LL Nov 8 00:31:13.630276 containerd[1513]: time="2025-11-08T00:31:13.630158644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:13.632954 containerd[1513]: time="2025-11-08T00:31:13.632755963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:13.632954 containerd[1513]: time="2025-11-08T00:31:13.632871418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:13.633162 kubelet[2635]: E1108 00:31:13.633104 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:13.633690 kubelet[2635]: E1108 00:31:13.633184 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:13.633690 kubelet[2635]: E1108 00:31:13.633586 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:13.634726 containerd[1513]: time="2025-11-08T00:31:13.634661232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:13.635303 kubelet[2635]: E1108 00:31:13.635208 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:31:13.878872 kubelet[2635]: E1108 00:31:13.878442 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:31:13.880891 kubelet[2635]: E1108 00:31:13.880204 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:31:14.030217 systemd-networkd[1395]: calie87a9e69999: Gained IPv6LL Nov 8 00:31:14.079165 containerd[1513]: time="2025-11-08T00:31:14.078870005Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:14.081081 containerd[1513]: time="2025-11-08T00:31:14.081004041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:14.081185 containerd[1513]: time="2025-11-08T00:31:14.081129726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:14.081542 kubelet[2635]: E1108 00:31:14.081473 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:14.081656 kubelet[2635]: E1108 00:31:14.081549 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:14.081924 kubelet[2635]: E1108 00:31:14.081775 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49q75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-jbmnh_calico-apiserver(a375dcd8-3fbe-482e-815c-13c9165d26d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:31:14.083646 kubelet[2635]: E1108 00:31:14.083586 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:31:14.887773 kubelet[2635]: E1108 00:31:14.887709 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:31:18.436594 containerd[1513]: time="2025-11-08T00:31:18.435808384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:18.872611 containerd[1513]: time="2025-11-08T00:31:18.872430413Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:18.874759 containerd[1513]: time="2025-11-08T00:31:18.874692600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:18.875837 containerd[1513]: time="2025-11-08T00:31:18.875072458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:18.876994 kubelet[2635]: E1108 00:31:18.876141 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:18.876994 kubelet[2635]: E1108 00:31:18.876207 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:18.876994 kubelet[2635]: E1108 00:31:18.876394 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7e33296a79e41f49551734f14da812b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:18.879489 containerd[1513]: time="2025-11-08T00:31:18.879393735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:19.298938 containerd[1513]: time="2025-11-08T00:31:19.298846173Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:19.300991 containerd[1513]: time="2025-11-08T00:31:19.300890044Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:19.302157 containerd[1513]: time="2025-11-08T00:31:19.301074658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:19.302254 kubelet[2635]: E1108 00:31:19.301292 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:19.302254 kubelet[2635]: E1108 00:31:19.301357 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:19.302254 kubelet[2635]: E1108 00:31:19.301566 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:19.304198 kubelet[2635]: E1108 00:31:19.303319 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:31:25.432929 containerd[1513]: time="2025-11-08T00:31:25.432846713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:25.882015 containerd[1513]: time="2025-11-08T00:31:25.881932400Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 
8 00:31:25.884647 containerd[1513]: time="2025-11-08T00:31:25.884369627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:25.884647 containerd[1513]: time="2025-11-08T00:31:25.884519236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:25.884820 kubelet[2635]: E1108 00:31:25.884742 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:25.885230 kubelet[2635]: E1108 00:31:25.884828 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:25.885230 kubelet[2635]: E1108 00:31:25.885156 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:25.886747 containerd[1513]: time="2025-11-08T00:31:25.886548032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:26.364791 containerd[1513]: time="2025-11-08T00:31:26.364664161Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:26.368791 containerd[1513]: time="2025-11-08T00:31:26.366828711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:26.368791 containerd[1513]: time="2025-11-08T00:31:26.366883754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:26.368791 containerd[1513]: time="2025-11-08T00:31:26.368479913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:26.369041 kubelet[2635]: E1108 00:31:26.367097 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:26.369041 kubelet[2635]: E1108 00:31:26.367200 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:26.369041 kubelet[2635]: E1108 00:31:26.367657 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49q75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-jbmnh_calico-apiserver(a375dcd8-3fbe-482e-815c-13c9165d26d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:26.369732 kubelet[2635]: E1108 00:31:26.369669 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:31:26.815639 containerd[1513]: time="2025-11-08T00:31:26.815574647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:26.817648 containerd[1513]: time="2025-11-08T00:31:26.817564701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:26.817748 containerd[1513]: time="2025-11-08T00:31:26.817683522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:26.817976 kubelet[2635]: E1108 00:31:26.817906 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:26.818060 kubelet[2635]: E1108 00:31:26.817980 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:26.819004 containerd[1513]: 
time="2025-11-08T00:31:26.818460462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:26.819093 kubelet[2635]: E1108 00:31:26.818863 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:26.821115 kubelet[2635]: E1108 00:31:26.821010 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:31:27.287176 containerd[1513]: time="2025-11-08T00:31:27.287082638Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 
00:31:27.289268 containerd[1513]: time="2025-11-08T00:31:27.289056872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:27.289268 containerd[1513]: time="2025-11-08T00:31:27.289101266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:27.289523 kubelet[2635]: E1108 00:31:27.289441 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:27.289995 kubelet[2635]: E1108 00:31:27.289531 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:27.289995 kubelet[2635]: E1108 00:31:27.289783 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n49w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b858bf6c9-q5c9k_calico-system(7bfc75bc-86f9-445a-9d64-08b33fc703e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:27.292144 kubelet[2635]: E1108 00:31:27.292072 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:31:27.432130 containerd[1513]: time="2025-11-08T00:31:27.431903772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:27.883610 containerd[1513]: time="2025-11-08T00:31:27.883531217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:27.886030 containerd[1513]: time="2025-11-08T00:31:27.885891092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:27.886030 containerd[1513]: time="2025-11-08T00:31:27.885957645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:27.886341 kubelet[2635]: E1108 00:31:27.886224 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:27.886341 kubelet[2635]: E1108 00:31:27.886288 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:27.887226 kubelet[2635]: E1108 
00:31:27.886756 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-595cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p4b9r_calico-system(8b84b572-8061-4813-b462-3eea6f974bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:27.887866 containerd[1513]: time="2025-11-08T00:31:27.886899203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:27.889059 kubelet[2635]: E1108 00:31:27.889018 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:31:28.338080 containerd[1513]: time="2025-11-08T00:31:28.337991516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:28.340073 containerd[1513]: time="2025-11-08T00:31:28.339871295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:28.340073 containerd[1513]: time="2025-11-08T00:31:28.339984226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:28.341369 kubelet[2635]: E1108 00:31:28.340509 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:28.341369 kubelet[2635]: E1108 00:31:28.340579 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:28.341369 kubelet[2635]: E1108 00:31:28.340793 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlzmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-brv5c_calico-apiserver(562850b7-c26f-461d-bfe8-c8a199e1bb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:28.342181 kubelet[2635]: E1108 00:31:28.342111 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:31:28.451090 containerd[1513]: time="2025-11-08T00:31:28.451025408Z" level=info msg="StopPodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\"" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.495 [WARNING][4916] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"562850b7-c26f-461d-bfe8-c8a199e1bb8c", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7", Pod:"calico-apiserver-64c9474b6f-brv5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b0f37d1dd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.495 [INFO][4916] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.496 [INFO][4916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" iface="eth0" netns="" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.496 [INFO][4916] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.496 [INFO][4916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.536 [INFO][4924] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.536 [INFO][4924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.536 [INFO][4924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.545 [WARNING][4924] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.545 [INFO][4924] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.546 [INFO][4924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:28.550318 containerd[1513]: 2025-11-08 00:31:28.548 [INFO][4916] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.550875 containerd[1513]: time="2025-11-08T00:31:28.550372651Z" level=info msg="TearDown network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" successfully" Nov 8 00:31:28.550875 containerd[1513]: time="2025-11-08T00:31:28.550454564Z" level=info msg="StopPodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" returns successfully" Nov 8 00:31:28.550875 containerd[1513]: time="2025-11-08T00:31:28.551342281Z" level=info msg="RemovePodSandbox for \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\"" Nov 8 00:31:28.550875 containerd[1513]: time="2025-11-08T00:31:28.551364443Z" level=info msg="Forcibly stopping sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\"" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.582 [WARNING][4938] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"562850b7-c26f-461d-bfe8-c8a199e1bb8c", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"13dd5c6f1e942c1868ac9d15c7e1af7033f411cbaa33df6dd5acb050f2aa4dd7", Pod:"calico-apiserver-64c9474b6f-brv5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b0f37d1dd6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.582 [INFO][4938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.582 [INFO][4938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" iface="eth0" netns="" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.582 [INFO][4938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.582 [INFO][4938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.604 [INFO][4945] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.604 [INFO][4945] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.604 [INFO][4945] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.609 [WARNING][4945] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.609 [INFO][4945] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" HandleID="k8s-pod-network.035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--brv5c-eth0" Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.611 [INFO][4945] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:28.614273 containerd[1513]: 2025-11-08 00:31:28.612 [INFO][4938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3" Nov 8 00:31:28.615261 containerd[1513]: time="2025-11-08T00:31:28.614259428Z" level=info msg="TearDown network for sandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" successfully" Nov 8 00:31:28.625037 containerd[1513]: time="2025-11-08T00:31:28.624990387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:28.625037 containerd[1513]: time="2025-11-08T00:31:28.625040380Z" level=info msg="RemovePodSandbox \"035debdf15718a94d28861e0a5e70885ebe54a0697c8dc472c487af8cc4938d3\" returns successfully" Nov 8 00:31:28.625585 containerd[1513]: time="2025-11-08T00:31:28.625550974Z" level=info msg="StopPodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\"" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.654 [WARNING][4960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5", Pod:"coredns-674b8bbfcf-lf6dp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6d4890cdca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.654 [INFO][4960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.654 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" iface="eth0" netns="" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.654 [INFO][4960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.654 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.675 [INFO][4967] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.675 [INFO][4967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.675 [INFO][4967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.681 [WARNING][4967] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.681 [INFO][4967] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.683 [INFO][4967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:28.687648 containerd[1513]: 2025-11-08 00:31:28.685 [INFO][4960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.688179 containerd[1513]: time="2025-11-08T00:31:28.687685850Z" level=info msg="TearDown network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" successfully" Nov 8 00:31:28.688179 containerd[1513]: time="2025-11-08T00:31:28.687712610Z" level=info msg="StopPodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" returns successfully" Nov 8 00:31:28.688466 containerd[1513]: time="2025-11-08T00:31:28.688374275Z" level=info msg="RemovePodSandbox for \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\"" Nov 8 00:31:28.688551 containerd[1513]: time="2025-11-08T00:31:28.688523805Z" level=info msg="Forcibly stopping sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\"" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.727 [WARNING][4981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0bf2f9f4-97c3-4ca2-a71b-3b614c8ed2e5", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"f6e37b6d5919aa65cb1d0a893994795946b9fc67bdc7a7fe28cfa0d7da7a86e5", Pod:"coredns-674b8bbfcf-lf6dp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6d4890cdca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.728 [INFO][4981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.728 [INFO][4981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" iface="eth0" netns="" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.728 [INFO][4981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.728 [INFO][4981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.753 [INFO][4988] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.753 [INFO][4988] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.753 [INFO][4988] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.761 [WARNING][4988] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.762 [INFO][4988] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" HandleID="k8s-pod-network.d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--lf6dp-eth0" Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.763 [INFO][4988] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:28.768514 containerd[1513]: 2025-11-08 00:31:28.766 [INFO][4981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c" Nov 8 00:31:28.769010 containerd[1513]: time="2025-11-08T00:31:28.768569792Z" level=info msg="TearDown network for sandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" successfully" Nov 8 00:31:28.774739 containerd[1513]: time="2025-11-08T00:31:28.774536118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:28.774739 containerd[1513]: time="2025-11-08T00:31:28.774623221Z" level=info msg="RemovePodSandbox \"d76ac73cc706d5615cff0f333930a5fb8ac1d3ff30f2b60162f79d29073d324c\" returns successfully" Nov 8 00:31:28.775438 containerd[1513]: time="2025-11-08T00:31:28.775230484Z" level=info msg="StopPodSandbox for \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\"" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.839 [WARNING][5002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a730453-478d-46fd-915f-5cbf5e28b105", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96", Pod:"csi-node-driver-m4hcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia32d007b9de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.839 [INFO][5002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.840 [INFO][5002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" iface="eth0" netns="" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.840 [INFO][5002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.840 [INFO][5002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.872 [INFO][5009] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.872 [INFO][5009] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.872 [INFO][5009] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.880 [WARNING][5009] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.880 [INFO][5009] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.882 [INFO][5009] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:28.887040 containerd[1513]: 2025-11-08 00:31:28.884 [INFO][5002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.888227 containerd[1513]: time="2025-11-08T00:31:28.888031059Z" level=info msg="TearDown network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" successfully" Nov 8 00:31:28.888227 containerd[1513]: time="2025-11-08T00:31:28.888069221Z" level=info msg="StopPodSandbox for \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" returns successfully" Nov 8 00:31:28.888947 containerd[1513]: time="2025-11-08T00:31:28.888898419Z" level=info msg="RemovePodSandbox for \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\"" Nov 8 00:31:28.889017 containerd[1513]: time="2025-11-08T00:31:28.888948462Z" level=info msg="Forcibly stopping sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\"" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.931 [WARNING][5023] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7a730453-478d-46fd-915f-5cbf5e28b105", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"0bc82b60d107b4e1bf400ff8eead068a67a2628943a7533a5fe374678ca0cb96", Pod:"csi-node-driver-m4hcx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia32d007b9de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.931 [INFO][5023] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.931 [INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" iface="eth0" netns="" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.931 [INFO][5023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.931 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.962 [INFO][5030] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.962 [INFO][5030] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.962 [INFO][5030] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.970 [WARNING][5030] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.970 [INFO][5030] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" HandleID="k8s-pod-network.3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Workload="ci--4081--3--6--n--dcea41702a-k8s-csi--node--driver--m4hcx-eth0" Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.973 [INFO][5030] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:28.979017 containerd[1513]: 2025-11-08 00:31:28.975 [INFO][5023] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b" Nov 8 00:31:28.979017 containerd[1513]: time="2025-11-08T00:31:28.977592479Z" level=info msg="TearDown network for sandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" successfully" Nov 8 00:31:28.981759 containerd[1513]: time="2025-11-08T00:31:28.981707519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:28.981759 containerd[1513]: time="2025-11-08T00:31:28.981764616Z" level=info msg="RemovePodSandbox \"3c74528a9654c9f7c5e7ff0214df036fe9da617824113227a9cea340868fa00b\" returns successfully" Nov 8 00:31:28.982394 containerd[1513]: time="2025-11-08T00:31:28.982347935Z" level=info msg="StopPodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\"" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.024 [WARNING][5044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a375dcd8-3fbe-482e-815c-13c9165d26d2", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b", Pod:"calico-apiserver-64c9474b6f-jbmnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie87a9e69999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.024 [INFO][5044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.024 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" iface="eth0" netns="" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.024 [INFO][5044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.024 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.050 [INFO][5051] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.050 [INFO][5051] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.050 [INFO][5051] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.057 [WARNING][5051] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.057 [INFO][5051] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.060 [INFO][5051] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.065050 containerd[1513]: 2025-11-08 00:31:29.062 [INFO][5044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.066681 containerd[1513]: time="2025-11-08T00:31:29.065084347Z" level=info msg="TearDown network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" successfully" Nov 8 00:31:29.066681 containerd[1513]: time="2025-11-08T00:31:29.065122898Z" level=info msg="StopPodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" returns successfully" Nov 8 00:31:29.066681 containerd[1513]: time="2025-11-08T00:31:29.065798230Z" level=info msg="RemovePodSandbox for \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\"" Nov 8 00:31:29.066681 containerd[1513]: time="2025-11-08T00:31:29.065831921Z" level=info msg="Forcibly stopping sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\"" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.105 [WARNING][5065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0", GenerateName:"calico-apiserver-64c9474b6f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a375dcd8-3fbe-482e-815c-13c9165d26d2", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64c9474b6f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"26798eaa6a159e0c4fca6fe16ea67e8df4b37a74f276769fcddf8b74a5214a8b", Pod:"calico-apiserver-64c9474b6f-jbmnh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie87a9e69999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.105 [INFO][5065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.105 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" iface="eth0" netns="" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.105 [INFO][5065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.105 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.134 [INFO][5073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.134 [INFO][5073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.134 [INFO][5073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.143 [WARNING][5073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.143 [INFO][5073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" HandleID="k8s-pod-network.26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--apiserver--64c9474b6f--jbmnh-eth0" Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.146 [INFO][5073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.151699 containerd[1513]: 2025-11-08 00:31:29.148 [INFO][5065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190" Nov 8 00:31:29.151699 containerd[1513]: time="2025-11-08T00:31:29.151636675Z" level=info msg="TearDown network for sandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" successfully" Nov 8 00:31:29.160717 containerd[1513]: time="2025-11-08T00:31:29.160625715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:29.160885 containerd[1513]: time="2025-11-08T00:31:29.160727876Z" level=info msg="RemovePodSandbox \"26337d2d93e4cf334eb08f47d74a660e344a7fb45b55899a7113cdc53f334190\" returns successfully" Nov 8 00:31:29.161904 containerd[1513]: time="2025-11-08T00:31:29.161271250Z" level=info msg="StopPodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\"" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.204 [WARNING][5087] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0", GenerateName:"calico-kube-controllers-5b858bf6c9-", Namespace:"calico-system", SelfLink:"", UID:"7bfc75bc-86f9-445a-9d64-08b33fc703e4", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b858bf6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57", Pod:"calico-kube-controllers-5b858bf6c9-q5c9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52d3a186381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.204 [INFO][5087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.204 [INFO][5087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" iface="eth0" netns="" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.204 [INFO][5087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.204 [INFO][5087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.237 [INFO][5095] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.238 [INFO][5095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.238 [INFO][5095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.248 [WARNING][5095] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.248 [INFO][5095] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.251 [INFO][5095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.258082 containerd[1513]: 2025-11-08 00:31:29.254 [INFO][5087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.258980 containerd[1513]: time="2025-11-08T00:31:29.258192986Z" level=info msg="TearDown network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" successfully" Nov 8 00:31:29.258980 containerd[1513]: time="2025-11-08T00:31:29.258259079Z" level=info msg="StopPodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" returns successfully" Nov 8 00:31:29.259576 containerd[1513]: time="2025-11-08T00:31:29.259517698Z" level=info msg="RemovePodSandbox for \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\"" Nov 8 00:31:29.259576 containerd[1513]: time="2025-11-08T00:31:29.259568694Z" level=info msg="Forcibly stopping sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\"" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.319 [WARNING][5109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0", GenerateName:"calico-kube-controllers-5b858bf6c9-", Namespace:"calico-system", SelfLink:"", UID:"7bfc75bc-86f9-445a-9d64-08b33fc703e4", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5b858bf6c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"1a2501867a869df5934ac6531a6d0fc87101af50d2a5c076592b199014706d57", Pod:"calico-kube-controllers-5b858bf6c9-q5c9k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali52d3a186381", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.320 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.320 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" iface="eth0" netns="" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.320 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.320 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.356 [INFO][5116] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.356 [INFO][5116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.356 [INFO][5116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.366 [WARNING][5116] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.367 [INFO][5116] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" HandleID="k8s-pod-network.27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Workload="ci--4081--3--6--n--dcea41702a-k8s-calico--kube--controllers--5b858bf6c9--q5c9k-eth0" Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.368 [INFO][5116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.373422 containerd[1513]: 2025-11-08 00:31:29.371 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9" Nov 8 00:31:29.373937 containerd[1513]: time="2025-11-08T00:31:29.373472921Z" level=info msg="TearDown network for sandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" successfully" Nov 8 00:31:29.379033 containerd[1513]: time="2025-11-08T00:31:29.378978148Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:29.379113 containerd[1513]: time="2025-11-08T00:31:29.379056014Z" level=info msg="RemovePodSandbox \"27b2a77bcdfcda3cbb83d79f9f6819a91589a2477e23586db8ba2afc9a9d5bf9\" returns successfully" Nov 8 00:31:29.379735 containerd[1513]: time="2025-11-08T00:31:29.379698552Z" level=info msg="StopPodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\"" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.423 [WARNING][5130] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.423 [INFO][5130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.423 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" iface="eth0" netns="" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.423 [INFO][5130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.423 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.454 [INFO][5138] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.454 [INFO][5138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.454 [INFO][5138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.463 [WARNING][5138] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.464 [INFO][5138] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.466 [INFO][5138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.470497 containerd[1513]: 2025-11-08 00:31:29.468 [INFO][5130] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.470892 containerd[1513]: time="2025-11-08T00:31:29.470544015Z" level=info msg="TearDown network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" successfully" Nov 8 00:31:29.470892 containerd[1513]: time="2025-11-08T00:31:29.470578329Z" level=info msg="StopPodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" returns successfully" Nov 8 00:31:29.471941 containerd[1513]: time="2025-11-08T00:31:29.471587323Z" level=info msg="RemovePodSandbox for \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\"" Nov 8 00:31:29.471941 containerd[1513]: time="2025-11-08T00:31:29.471632467Z" level=info msg="Forcibly stopping sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\"" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.519 [WARNING][5152] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" WorkloadEndpoint="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.520 [INFO][5152] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.520 [INFO][5152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" iface="eth0" netns="" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.520 [INFO][5152] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.520 [INFO][5152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.554 [INFO][5159] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.554 [INFO][5159] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.554 [INFO][5159] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.561 [WARNING][5159] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.561 [INFO][5159] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" HandleID="k8s-pod-network.df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Workload="ci--4081--3--6--n--dcea41702a-k8s-whisker--97cb9cc46--6qg4s-eth0" Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.563 [INFO][5159] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.567584 containerd[1513]: 2025-11-08 00:31:29.565 [INFO][5152] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a" Nov 8 00:31:29.568655 containerd[1513]: time="2025-11-08T00:31:29.568172871Z" level=info msg="TearDown network for sandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" successfully" Nov 8 00:31:29.573481 containerd[1513]: time="2025-11-08T00:31:29.573434443Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:29.574077 containerd[1513]: time="2025-11-08T00:31:29.573502430Z" level=info msg="RemovePodSandbox \"df53ef4e04527c61b0ce1d1be4deb0b14b7015bbccd3799d7f0b134d2840931a\" returns successfully" Nov 8 00:31:29.574262 containerd[1513]: time="2025-11-08T00:31:29.574219809Z" level=info msg="StopPodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\"" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.610 [WARNING][5173] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f765679-4f0d-4ea9-957a-c1950533f8b3", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3", Pod:"coredns-674b8bbfcf-dcdpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fe312b4e87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.610 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.610 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" iface="eth0" netns="" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.610 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.610 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.636 [INFO][5180] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.636 [INFO][5180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.636 [INFO][5180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.642 [WARNING][5180] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.642 [INFO][5180] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.644 [INFO][5180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.648443 containerd[1513]: 2025-11-08 00:31:29.646 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.648443 containerd[1513]: time="2025-11-08T00:31:29.648258238Z" level=info msg="TearDown network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" successfully" Nov 8 00:31:29.648443 containerd[1513]: time="2025-11-08T00:31:29.648289025Z" level=info msg="StopPodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" returns successfully" Nov 8 00:31:29.649345 containerd[1513]: time="2025-11-08T00:31:29.649170610Z" level=info msg="RemovePodSandbox for \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\"" Nov 8 00:31:29.649345 containerd[1513]: time="2025-11-08T00:31:29.649202069Z" level=info msg="Forcibly stopping sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\"" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.684 [WARNING][5194] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0f765679-4f0d-4ea9-957a-c1950533f8b3", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"1e90d5e53cb31b0a408fbd2a6766e1897bcf531270f1a0e3c132cfb6536fb3a3", Pod:"coredns-674b8bbfcf-dcdpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fe312b4e87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.684 [INFO][5194] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.684 [INFO][5194] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" iface="eth0" netns="" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.684 [INFO][5194] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.684 [INFO][5194] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.705 [INFO][5201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.705 [INFO][5201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.706 [INFO][5201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.711 [WARNING][5201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.711 [INFO][5201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" HandleID="k8s-pod-network.254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Workload="ci--4081--3--6--n--dcea41702a-k8s-coredns--674b8bbfcf--dcdpj-eth0" Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.712 [INFO][5201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.717549 containerd[1513]: 2025-11-08 00:31:29.714 [INFO][5194] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5" Nov 8 00:31:29.718850 containerd[1513]: time="2025-11-08T00:31:29.717595180Z" level=info msg="TearDown network for sandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" successfully" Nov 8 00:31:29.722281 containerd[1513]: time="2025-11-08T00:31:29.721934340Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:29.722281 containerd[1513]: time="2025-11-08T00:31:29.722027193Z" level=info msg="RemovePodSandbox \"254146499eb0b845650e45f07c45588f69399f8bc87a047b13e32f23f49f8ee5\" returns successfully" Nov 8 00:31:29.724539 containerd[1513]: time="2025-11-08T00:31:29.724498025Z" level=info msg="StopPodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\"" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.764 [WARNING][5216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b84b572-8061-4813-b462-3eea6f974bdd", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c", Pod:"goldmane-666569f655-p4b9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f62a7351ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.764 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.764 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" iface="eth0" netns="" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.764 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.764 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.790 [INFO][5223] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.790 [INFO][5223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.790 [INFO][5223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.799 [WARNING][5223] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.799 [INFO][5223] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.801 [INFO][5223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.806229 containerd[1513]: 2025-11-08 00:31:29.803 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.806229 containerd[1513]: time="2025-11-08T00:31:29.806047095Z" level=info msg="TearDown network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" successfully" Nov 8 00:31:29.806229 containerd[1513]: time="2025-11-08T00:31:29.806078103Z" level=info msg="StopPodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" returns successfully" Nov 8 00:31:29.806900 containerd[1513]: time="2025-11-08T00:31:29.806801413Z" level=info msg="RemovePodSandbox for \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\"" Nov 8 00:31:29.806900 containerd[1513]: time="2025-11-08T00:31:29.806847068Z" level=info msg="Forcibly stopping sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\"" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.858 [WARNING][5237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b84b572-8061-4813-b462-3eea6f974bdd", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-dcea41702a", ContainerID:"09f967e40086090c315b9cc3de5e1734edbc52016c4ebe0d1de8efe71f4a5f9c", Pod:"goldmane-666569f655-p4b9r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4f62a7351ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.858 [INFO][5237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.858 [INFO][5237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" iface="eth0" netns="" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.858 [INFO][5237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.858 [INFO][5237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.890 [INFO][5245] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.890 [INFO][5245] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.890 [INFO][5245] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.898 [WARNING][5245] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.898 [INFO][5245] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" HandleID="k8s-pod-network.1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Workload="ci--4081--3--6--n--dcea41702a-k8s-goldmane--666569f655--p4b9r-eth0" Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.901 [INFO][5245] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:29.907451 containerd[1513]: 2025-11-08 00:31:29.903 [INFO][5237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23" Nov 8 00:31:29.907451 containerd[1513]: time="2025-11-08T00:31:29.905895524Z" level=info msg="TearDown network for sandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" successfully" Nov 8 00:31:29.912316 containerd[1513]: time="2025-11-08T00:31:29.912264422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:29.912607 containerd[1513]: time="2025-11-08T00:31:29.912546129Z" level=info msg="RemovePodSandbox \"1a8c6491115f9063fab08a03380517cca04cd87dd340eb47ac766ee6f48abc23\" returns successfully" Nov 8 00:31:32.434464 kubelet[2635]: E1108 00:31:32.433827 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:31:37.433650 kubelet[2635]: E1108 00:31:37.433583 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:31:39.434161 kubelet[2635]: E1108 00:31:39.434016 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:31:39.435224 kubelet[2635]: E1108 00:31:39.434769 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:31:41.432666 kubelet[2635]: E1108 00:31:41.432615 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:31:42.433158 kubelet[2635]: E1108 00:31:42.432225 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:31:46.437070 containerd[1513]: time="2025-11-08T00:31:46.436735411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:46.888800 containerd[1513]: time="2025-11-08T00:31:46.888544467Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:46.890191 containerd[1513]: time="2025-11-08T00:31:46.890044942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:46.890191 containerd[1513]: time="2025-11-08T00:31:46.890088022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:46.890367 kubelet[2635]: E1108 00:31:46.890320 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:46.890672 kubelet[2635]: E1108 00:31:46.890385 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:46.890672 kubelet[2635]: E1108 00:31:46.890531 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7e33296a79e41f49551734f14da812b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:46.894100 containerd[1513]: time="2025-11-08T00:31:46.894048556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:47.335046 containerd[1513]: time="2025-11-08T00:31:47.334949308Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:47.337719 containerd[1513]: time="2025-11-08T00:31:47.337619350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:47.337806 containerd[1513]: time="2025-11-08T00:31:47.337750886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:47.338054 kubelet[2635]: E1108 00:31:47.337949 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:47.338152 kubelet[2635]: E1108 00:31:47.338053 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:47.338323 kubelet[2635]: E1108 00:31:47.338243 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:31:47.340130 kubelet[2635]: E1108 00:31:47.340052 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:31:52.445690 containerd[1513]: time="2025-11-08T00:31:52.445303654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:52.901499 containerd[1513]: time="2025-11-08T00:31:52.899183833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:52.901499 containerd[1513]: time="2025-11-08T00:31:52.901436385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:52.901686 containerd[1513]: time="2025-11-08T00:31:52.901541131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:52.902125 kubelet[2635]: E1108 00:31:52.901782 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:52.902125 kubelet[2635]: E1108 00:31:52.901854 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:52.904483 kubelet[2635]: E1108 00:31:52.902230 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-595cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p4b9r_calico-system(8b84b572-8061-4813-b462-3eea6f974bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:52.904776 containerd[1513]: time="2025-11-08T00:31:52.904686813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:52.906379 kubelet[2635]: E1108 00:31:52.906294 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:31:53.346504 containerd[1513]: time="2025-11-08T00:31:53.346432896Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:53.350067 containerd[1513]: time="2025-11-08T00:31:53.349968868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:53.350067 containerd[1513]: time="2025-11-08T00:31:53.350031234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:53.350312 kubelet[2635]: E1108 00:31:53.350267 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:53.350369 kubelet[2635]: E1108 00:31:53.350328 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:53.350685 kubelet[2635]: E1108 00:31:53.350630 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49q75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-jbmnh_calico-apiserver(a375dcd8-3fbe-482e-815c-13c9165d26d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:53.351274 containerd[1513]: time="2025-11-08T00:31:53.351250184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:53.353039 kubelet[2635]: E1108 00:31:53.352805 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:31:53.807755 containerd[1513]: time="2025-11-08T00:31:53.807472544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:53.809958 containerd[1513]: time="2025-11-08T00:31:53.809787042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:53.809958 containerd[1513]: time="2025-11-08T00:31:53.809841033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:53.810176 kubelet[2635]: E1108 00:31:53.810100 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:53.810259 kubelet[2635]: E1108 00:31:53.810178 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:53.810454 kubelet[2635]: E1108 00:31:53.810355 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:53.814139 containerd[1513]: time="2025-11-08T00:31:53.814086812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:54.262742 containerd[1513]: time="2025-11-08T00:31:54.262405525Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:54.264829 containerd[1513]: time="2025-11-08T00:31:54.264701950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:54.264829 containerd[1513]: time="2025-11-08T00:31:54.264746313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:54.267422 kubelet[2635]: E1108 00:31:54.265379 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:54.267422 kubelet[2635]: E1108 00:31:54.265467 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:54.267422 kubelet[2635]: E1108 00:31:54.265670 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:54.268644 kubelet[2635]: E1108 00:31:54.268026 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:31:54.438810 containerd[1513]: time="2025-11-08T00:31:54.438079129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:54.868932 containerd[1513]: time="2025-11-08T00:31:54.868826181Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:54.871156 containerd[1513]: time="2025-11-08T00:31:54.871076018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:54.872434 containerd[1513]: time="2025-11-08T00:31:54.871200550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:54.872498 kubelet[2635]: E1108 00:31:54.871487 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:54.872498 kubelet[2635]: E1108 00:31:54.871564 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:54.872498 kubelet[2635]: E1108 00:31:54.871799 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlzmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-brv5c_calico-apiserver(562850b7-c26f-461d-bfe8-c8a199e1bb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:54.873519 kubelet[2635]: E1108 00:31:54.873487 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:31:56.441431 containerd[1513]: time="2025-11-08T00:31:56.440946606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:56.872069 containerd[1513]: time="2025-11-08T00:31:56.870604102Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:56.872522 containerd[1513]: time="2025-11-08T00:31:56.872431989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:56.872940 containerd[1513]: time="2025-11-08T00:31:56.872591057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:56.872975 kubelet[2635]: E1108 00:31:56.872849 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:56.872975 kubelet[2635]: E1108 00:31:56.872928 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:56.873564 kubelet[2635]: E1108 00:31:56.873096 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n49w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b858bf6c9-q5c9k_calico-system(7bfc75bc-86f9-445a-9d64-08b33fc703e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:56.874957 kubelet[2635]: E1108 00:31:56.874689 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:32:01.435193 kubelet[2635]: E1108 00:32:01.435082 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:32:03.435712 kubelet[2635]: E1108 00:32:03.435177 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:32:04.165335 systemd[1]: Started sshd@7-46.62.239.97:22-147.75.109.163:60108.service - OpenSSH per-connection server daemon (147.75.109.163:60108). Nov 8 00:32:04.821701 systemd[1]: run-containerd-runc-k8s.io-7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b-runc.VzReXX.mount: Deactivated successfully. Nov 8 00:32:05.231580 sshd[5292]: Accepted publickey for core from 147.75.109.163 port 60108 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:05.233430 sshd[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:05.246399 systemd-logind[1477]: New session 8 of user core. Nov 8 00:32:05.253768 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:32:06.434572 kubelet[2635]: E1108 00:32:06.434520 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:32:06.696216 sshd[5292]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:06.701913 systemd[1]: sshd@7-46.62.239.97:22-147.75.109.163:60108.service: Deactivated successfully. Nov 8 00:32:06.705212 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:32:06.709788 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. 
Nov 8 00:32:06.712678 systemd-logind[1477]: Removed session 8. Nov 8 00:32:07.432201 kubelet[2635]: E1108 00:32:07.431816 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:32:09.432000 kubelet[2635]: E1108 00:32:09.431926 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:32:09.435190 kubelet[2635]: E1108 00:32:09.435103 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:32:11.911878 systemd[1]: Started sshd@8-46.62.239.97:22-147.75.109.163:56946.service - OpenSSH per-connection server daemon (147.75.109.163:56946). Nov 8 00:32:13.050115 sshd[5336]: Accepted publickey for core from 147.75.109.163 port 56946 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:13.051811 sshd[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:13.058821 systemd-logind[1477]: New session 9 of user core. Nov 8 00:32:13.067581 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:32:13.927136 sshd[5336]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:13.932499 systemd[1]: sshd@8-46.62.239.97:22-147.75.109.163:56946.service: Deactivated successfully. Nov 8 00:32:13.937016 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:32:13.938794 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:32:13.940950 systemd-logind[1477]: Removed session 9. 
Nov 8 00:32:14.097660 systemd[1]: Started sshd@9-46.62.239.97:22-147.75.109.163:56956.service - OpenSSH per-connection server daemon (147.75.109.163:56956). Nov 8 00:32:15.124309 sshd[5350]: Accepted publickey for core from 147.75.109.163 port 56956 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:15.126750 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:15.137506 systemd-logind[1477]: New session 10 of user core. Nov 8 00:32:15.142619 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:32:15.435724 kubelet[2635]: E1108 00:32:15.435290 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:32:15.952678 sshd[5350]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:15.958624 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:32:15.958919 systemd[1]: sshd@9-46.62.239.97:22-147.75.109.163:56956.service: Deactivated successfully. Nov 8 00:32:15.964518 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:32:15.966201 systemd-logind[1477]: Removed session 10. Nov 8 00:32:16.130318 systemd[1]: Started sshd@10-46.62.239.97:22-147.75.109.163:56970.service - OpenSSH per-connection server daemon (147.75.109.163:56970). Nov 8 00:32:17.136078 sshd[5361]: Accepted publickey for core from 147.75.109.163 port 56970 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:17.138678 sshd[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:17.143035 systemd-logind[1477]: New session 11 of user core. Nov 8 00:32:17.149599 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:32:17.432170 kubelet[2635]: E1108 00:32:17.432012 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:32:18.004630 sshd[5361]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:18.011688 systemd[1]: sshd@10-46.62.239.97:22-147.75.109.163:56970.service: Deactivated successfully. 
Nov 8 00:32:18.013950 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:32:18.015529 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:32:18.017134 systemd-logind[1477]: Removed session 11. Nov 8 00:32:18.438543 kubelet[2635]: E1108 00:32:18.437905 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:32:19.433455 kubelet[2635]: E1108 00:32:19.432227 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:32:20.436193 kubelet[2635]: E1108 00:32:20.436131 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:32:23.216020 systemd[1]: Started sshd@11-46.62.239.97:22-147.75.109.163:38316.service - OpenSSH per-connection server daemon (147.75.109.163:38316). 
Nov 8 00:32:23.434083 kubelet[2635]: E1108 00:32:23.433727 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4"
Nov 8 00:32:24.351479 sshd[5380]: Accepted publickey for core from 147.75.109.163 port 38316 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:32:24.355328 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:32:24.362562 systemd-logind[1477]: New session 12 of user core.
Nov 8 00:32:24.368542 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:32:25.220288 sshd[5380]: pam_unix(sshd:session): session closed for user core
Nov 8 00:32:25.224961 systemd[1]: sshd@11-46.62.239.97:22-147.75.109.163:38316.service: Deactivated successfully.
Nov 8 00:32:25.229700 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:32:25.231110 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:32:25.232235 systemd-logind[1477]: Removed session 12.
Nov 8 00:32:29.432005 containerd[1513]: time="2025-11-08T00:32:29.431771665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:32:29.860096 containerd[1513]: time="2025-11-08T00:32:29.860046363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:29.862353 containerd[1513]: time="2025-11-08T00:32:29.862156533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:32:29.862353 containerd[1513]: time="2025-11-08T00:32:29.862265988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:32:29.863336 kubelet[2635]: E1108 00:32:29.862847 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:32:29.863336 kubelet[2635]: E1108 00:32:29.862949 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:32:29.864284 kubelet[2635]: E1108 00:32:29.863139 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d7e33296a79e41f49551734f14da812b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:29.866670 containerd[1513]: time="2025-11-08T00:32:29.866370190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:32:30.299953 containerd[1513]: time="2025-11-08T00:32:30.299678838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:30.302069 containerd[1513]: time="2025-11-08T00:32:30.301879598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:32:30.302069 containerd[1513]: time="2025-11-08T00:32:30.302015432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:32:30.303328 kubelet[2635]: E1108 00:32:30.302395 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:32:30.303328 kubelet[2635]: E1108 00:32:30.302517 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:32:30.303328 kubelet[2635]: E1108 00:32:30.302742 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dshgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-989fcc88-m4sv7_calico-system(b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:30.304569 kubelet[2635]: E1108 00:32:30.304519 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2"
Nov 8 00:32:30.382832 systemd[1]: Started sshd@12-46.62.239.97:22-147.75.109.163:46894.service - OpenSSH per-connection server daemon (147.75.109.163:46894).
Nov 8 00:32:30.432837 kubelet[2635]: E1108 00:32:30.431681 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:32:31.402180 sshd[5401]: Accepted publickey for core from 147.75.109.163 port 46894 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:31.405490 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:31.413478 systemd-logind[1477]: New session 13 of user core. Nov 8 00:32:31.419599 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:32:31.433530 kubelet[2635]: E1108 00:32:31.432205 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:32:31.433530 kubelet[2635]: E1108 00:32:31.432764 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:32:32.244551 sshd[5401]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:32.248687 systemd[1]: sshd@12-46.62.239.97:22-147.75.109.163:46894.service: Deactivated successfully. Nov 8 00:32:32.251576 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:32:32.252776 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:32:32.254126 systemd-logind[1477]: Removed session 13. 
Nov 8 00:32:32.438053 kubelet[2635]: E1108 00:32:32.437871 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105"
Nov 8 00:32:37.422920 systemd[1]: Started sshd@13-46.62.239.97:22-147.75.109.163:46900.service - OpenSSH per-connection server daemon (147.75.109.163:46900).
Nov 8 00:32:37.433495 containerd[1513]: time="2025-11-08T00:32:37.433455656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:32:37.860378 containerd[1513]: time="2025-11-08T00:32:37.860282889Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:37.862799 containerd[1513]: time="2025-11-08T00:32:37.862650510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:32:37.862799 containerd[1513]: time="2025-11-08T00:32:37.862749496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:32:37.862993 kubelet[2635]: E1108 00:32:37.862933 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:32:37.863659 kubelet[2635]: E1108 00:32:37.863000 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:32:37.863659 kubelet[2635]: E1108 00:32:37.863153 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n49w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5b858bf6c9-q5c9k_calico-system(7bfc75bc-86f9-445a-9d64-08b33fc703e4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:37.865038 kubelet[2635]: E1108 00:32:37.864964 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4"
Nov 8 00:32:38.417658 sshd[5452]: Accepted publickey for core from 147.75.109.163 port 46900 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:32:38.420825 sshd[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:32:38.433133 systemd-logind[1477]: New session 14 of user core.
Nov 8 00:32:38.439029 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:32:39.265654 sshd[5452]: pam_unix(sshd:session): session closed for user core
Nov 8 00:32:39.270255 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:32:39.272988 systemd[1]: sshd@13-46.62.239.97:22-147.75.109.163:46900.service: Deactivated successfully.
Nov 8 00:32:39.276150 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:32:39.279365 systemd-logind[1477]: Removed session 14.
Nov 8 00:32:39.443880 systemd[1]: Started sshd@14-46.62.239.97:22-147.75.109.163:46908.service - OpenSSH per-connection server daemon (147.75.109.163:46908).
Nov 8 00:32:40.462016 sshd[5472]: Accepted publickey for core from 147.75.109.163 port 46908 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:32:40.464788 sshd[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:32:40.469317 systemd-logind[1477]: New session 15 of user core.
Nov 8 00:32:40.476704 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:32:41.504655 sshd[5472]: pam_unix(sshd:session): session closed for user core
Nov 8 00:32:41.509790 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:32:41.510966 systemd[1]: sshd@14-46.62.239.97:22-147.75.109.163:46908.service: Deactivated successfully.
Nov 8 00:32:41.513986 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:32:41.520122 systemd-logind[1477]: Removed session 15.
Nov 8 00:32:41.681802 systemd[1]: Started sshd@15-46.62.239.97:22-147.75.109.163:44028.service - OpenSSH per-connection server daemon (147.75.109.163:44028).
Nov 8 00:32:42.437448 containerd[1513]: time="2025-11-08T00:32:42.435396560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:32:42.441436 kubelet[2635]: E1108 00:32:42.439253 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2"
Nov 8 00:32:42.690643 sshd[5483]: Accepted publickey for core from 147.75.109.163 port 44028 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:32:42.693532 sshd[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:32:42.699365 systemd-logind[1477]: New session 16 of user core.
Nov 8 00:32:42.704821 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:32:42.870786 containerd[1513]: time="2025-11-08T00:32:42.870527210Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:42.872590 containerd[1513]: time="2025-11-08T00:32:42.872389327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:32:42.872590 containerd[1513]: time="2025-11-08T00:32:42.872509292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:32:42.874188 kubelet[2635]: E1108 00:32:42.872865 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:32:42.874188 kubelet[2635]: E1108 00:32:42.872929 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:32:42.874188 kubelet[2635]: E1108 00:32:42.873232 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49q75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-jbmnh_calico-apiserver(a375dcd8-3fbe-482e-815c-13c9165d26d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:42.874429 containerd[1513]: time="2025-11-08T00:32:42.873943997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:32:42.874634 kubelet[2635]: E1108 00:32:42.874608 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2"
Nov 8 00:32:43.303194 containerd[1513]: time="2025-11-08T00:32:43.302978160Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:43.304735 containerd[1513]: time="2025-11-08T00:32:43.304527791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:32:43.305155 containerd[1513]: time="2025-11-08T00:32:43.305020364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:32:43.305450 kubelet[2635]: E1108 00:32:43.305248 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:32:43.305710 kubelet[2635]: E1108 00:32:43.305588 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:32:43.308711 containerd[1513]: time="2025-11-08T00:32:43.306856070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:32:43.308932 kubelet[2635]: E1108 00:32:43.308676 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-595cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p4b9r_calico-system(8b84b572-8061-4813-b462-3eea6f974bdd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:43.310811 kubelet[2635]: E1108 00:32:43.310740 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd"
Nov 8 00:32:43.768348 containerd[1513]: time="2025-11-08T00:32:43.768271626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:43.772457 containerd[1513]: time="2025-11-08T00:32:43.771573588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:32:43.772457 containerd[1513]: time="2025-11-08T00:32:43.771659549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:32:43.792616 kubelet[2635]: E1108 00:32:43.792496 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:32:43.793045 kubelet[2635]: E1108 00:32:43.792634 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:32:43.793045 kubelet[2635]: E1108 00:32:43.792842 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlzmd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64c9474b6f-brv5c_calico-apiserver(562850b7-c26f-461d-bfe8-c8a199e1bb8c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:43.794498 kubelet[2635]: E1108 00:32:43.794462 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c"
Nov 8 00:32:44.350960 sshd[5483]: pam_unix(sshd:session): session closed for user core
Nov 8 00:32:44.361352 systemd[1]: sshd@15-46.62.239.97:22-147.75.109.163:44028.service: Deactivated successfully.
Nov 8 00:32:44.368686 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:32:44.371322 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:32:44.375653 systemd-logind[1477]: Removed session 16.
Nov 8 00:32:44.553850 systemd[1]: Started sshd@16-46.62.239.97:22-147.75.109.163:44030.service - OpenSSH per-connection server daemon (147.75.109.163:44030).
Nov 8 00:32:45.680603 sshd[5502]: Accepted publickey for core from 147.75.109.163 port 44030 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:32:45.682966 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:32:45.692515 systemd-logind[1477]: New session 17 of user core.
Nov 8 00:32:45.694582 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:32:46.434888 containerd[1513]: time="2025-11-08T00:32:46.434825010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:32:46.732653 sshd[5502]: pam_unix(sshd:session): session closed for user core
Nov 8 00:32:46.736445 systemd[1]: sshd@16-46.62.239.97:22-147.75.109.163:44030.service: Deactivated successfully.
Nov 8 00:32:46.739080 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:32:46.740981 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:32:46.742509 systemd-logind[1477]: Removed session 17.
Nov 8 00:32:46.868286 containerd[1513]: time="2025-11-08T00:32:46.868211445Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:46.870491 containerd[1513]: time="2025-11-08T00:32:46.870432864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:32:46.870650 containerd[1513]: time="2025-11-08T00:32:46.870599706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:32:46.872650 kubelet[2635]: E1108 00:32:46.872566 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:32:46.872650 kubelet[2635]: E1108 00:32:46.872630 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:32:46.874320 kubelet[2635]: E1108 00:32:46.872790 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:46.874967 containerd[1513]: time="2025-11-08T00:32:46.874926757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:32:46.896750 systemd[1]: Started sshd@17-46.62.239.97:22-147.75.109.163:44038.service - OpenSSH per-connection server daemon (147.75.109.163:44038).
Nov 8 00:32:47.304210 containerd[1513]: time="2025-11-08T00:32:47.304018517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:32:47.306193 containerd[1513]: time="2025-11-08T00:32:47.306045242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:32:47.307341 containerd[1513]: time="2025-11-08T00:32:47.306092230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:32:47.307388 kubelet[2635]: E1108 00:32:47.306733 2635 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:32:47.307388 kubelet[2635]: E1108 00:32:47.306802 2635 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:32:47.307388 kubelet[2635]: E1108 00:32:47.306951 2635 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4m2rp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-m4hcx_calico-system(7a730453-478d-46fd-915f-5cbf5e28b105): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:32:47.308341 kubelet[2635]: E1108 00:32:47.308246 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105"
Nov 8 00:32:47.912095 sshd[5512]: Accepted publickey for core from 147.75.109.163 port 44038 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:32:47.914949 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:32:47.920053 systemd-logind[1477]: New session 18 of user core.
Nov 8 00:32:47.927776 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:32:48.748322 sshd[5512]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:48.754566 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:32:48.756819 systemd[1]: sshd@17-46.62.239.97:22-147.75.109.163:44038.service: Deactivated successfully. Nov 8 00:32:48.761329 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:32:48.764365 systemd-logind[1477]: Removed session 18. Nov 8 00:32:49.438308 kubelet[2635]: E1108 00:32:49.435370 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:32:53.922514 systemd[1]: Started sshd@18-46.62.239.97:22-147.75.109.163:45726.service - OpenSSH per-connection server daemon (147.75.109.163:45726). Nov 8 00:32:54.946507 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 45726 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:54.948785 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:54.954031 systemd-logind[1477]: New session 19 of user core. Nov 8 00:32:54.960649 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:32:55.435936 kubelet[2635]: E1108 00:32:55.435797 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:32:55.763293 sshd[5528]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:55.770405 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:32:55.772962 systemd[1]: sshd@18-46.62.239.97:22-147.75.109.163:45726.service: Deactivated successfully. Nov 8 00:32:55.781817 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:32:55.787651 systemd-logind[1477]: Removed session 19. 
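Every pull failure in this stretch of the log has the same shape: containerd returns a gRPC error with code = NotFound, which crosses the CRI boundary and is re-logged by kubelet as ErrImagePull. A short sketch of how such an error is constructed and classified with google.golang.org/grpc/status; isImageNotFound is a hypothetical helper, not kubelet code.

    // Sketch: classifying the "rpc error: code = NotFound" seen above.
    package main

    import (
    	"errors"
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // isImageNotFound reports whether an image-pull error carries the
    // gRPC NotFound code, the signature of a missing tag in the registry.
    func isImageNotFound(err error) bool {
    	return status.Code(err) == codes.NotFound
    }

    func main() {
    	err := status.Error(codes.NotFound,
    		`failed to pull and unpack image "ghcr.io/flatcar/calico/csi:v3.30.4"`)
    	fmt.Println(isImageNotFound(err))                  // true
    	fmt.Println(isImageNotFound(errors.New("other"))) // false
    }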
Nov 8 00:32:57.433773 kubelet[2635]: E1108 00:32:57.433520 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:32:57.433773 kubelet[2635]: E1108 00:32:57.433581 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:32:57.433773 kubelet[2635]: E1108 00:32:57.433708 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:32:58.439454 kubelet[2635]: E1108 00:32:58.438740 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:33:00.939720 systemd[1]: Started sshd@19-46.62.239.97:22-147.75.109.163:46566.service - OpenSSH per-connection server daemon (147.75.109.163:46566). Nov 8 00:33:01.959854 sshd[5541]: Accepted publickey for core from 147.75.109.163 port 46566 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:33:01.963061 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:33:01.970762 systemd-logind[1477]: New session 20 of user core. Nov 8 00:33:01.977659 systemd[1]: Started session-20.scope - Session 20 of User core. 
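The recurring "Back-off pulling image" entries reflect kubelet's per-image exponential back-off: as I understand the defaults, retries start around 10s apart and double up to a 5-minute cap, which is why the same pods reappear in the log at widening then steady intervals. An illustrative sketch of that schedule; the base and cap here are assumptions, not values read from this node's config.

    // Illustrative back-off schedule behind ImagePullBackOff.
    package main

    import (
    	"fmt"
    	"time"
    )

    // backoffSchedule returns successive retry delays: doubling from
    // base, clamped at limit.
    func backoffSchedule(base, limit time.Duration, attempts int) []time.Duration {
    	out := make([]time.Duration, 0, attempts)
    	d := base
    	for i := 0; i < attempts; i++ {
    		out = append(out, d)
    		d *= 2
    		if d > limit {
    			d = limit
    		}
    	}
    	return out
    }

    func main() {
    	// Prints: [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
    	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 7))
    }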
Nov 8 00:33:02.786391 sshd[5541]: pam_unix(sshd:session): session closed for user core Nov 8 00:33:02.792170 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:33:02.793652 systemd[1]: sshd@19-46.62.239.97:22-147.75.109.163:46566.service: Deactivated successfully. Nov 8 00:33:02.798242 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:33:02.803397 systemd-logind[1477]: Removed session 20. Nov 8 00:33:03.433542 kubelet[2635]: E1108 00:33:03.433463 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:33:04.813180 systemd[1]: run-containerd-runc-k8s.io-7f1ed33d51332844a6ec73571734552808c726b1be96df876bb95160fb50c88b-runc.DFBxJS.mount: Deactivated successfully. Nov 8 00:33:08.438816 kubelet[2635]: E1108 00:33:08.438761 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:33:08.439754 kubelet[2635]: E1108 00:33:08.439678 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:33:09.433718 kubelet[2635]: E1108 00:33:09.433656 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:33:11.432648 kubelet[2635]: E1108 00:33:11.432595 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:33:11.433475 kubelet[2635]: E1108 00:33:11.433166 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:33:18.433603 kubelet[2635]: E1108 00:33:18.433049 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 8 00:33:20.438181 kubelet[2635]: E1108 00:33:20.438129 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:33:20.441608 kubelet[2635]: E1108 00:33:20.441555 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:33:23.433006 kubelet[2635]: E1108 00:33:23.432921 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:33:23.433624 kubelet[2635]: E1108 00:33:23.433553 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:33:23.436617 kubelet[2635]: E1108 00:33:23.434611 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:33:31.432371 kubelet[2635]: E1108 00:33:31.432275 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4" Nov 
8 00:33:33.436445 kubelet[2635]: E1108 00:33:33.436335 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-m4hcx" podUID="7a730453-478d-46fd-915f-5cbf5e28b105" Nov 8 00:33:34.433223 kubelet[2635]: E1108 00:33:34.433158 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-jbmnh" podUID="a375dcd8-3fbe-482e-815c-13c9165d26d2" Nov 8 00:33:34.433775 kubelet[2635]: E1108 00:33:34.433720 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-989fcc88-m4sv7" podUID="b4091b12-a1d1-44f0-9f31-4fa77fdbf9a2" Nov 8 00:33:35.432678 kubelet[2635]: E1108 00:33:35.432607 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64c9474b6f-brv5c" podUID="562850b7-c26f-461d-bfe8-c8a199e1bb8c" Nov 8 00:33:35.722407 systemd[1]: cri-containerd-4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321.scope: Deactivated successfully. 
Nov 8 00:33:35.724658 systemd[1]: cri-containerd-4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321.scope: Consumed 26.817s CPU time. Nov 8 00:33:35.951083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321-rootfs.mount: Deactivated successfully. Nov 8 00:33:36.045833 containerd[1513]: time="2025-11-08T00:33:35.966209270Z" level=info msg="shim disconnected" id=4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321 namespace=k8s.io Nov 8 00:33:36.064447 containerd[1513]: time="2025-11-08T00:33:36.064355305Z" level=warning msg="cleaning up after shim disconnected" id=4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321 namespace=k8s.io Nov 8 00:33:36.064447 containerd[1513]: time="2025-11-08T00:33:36.064444944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:33:36.207709 kubelet[2635]: E1108 00:33:36.207642 2635 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36648->10.0.0.2:2379: read: connection timed out" Nov 8 00:33:36.373185 systemd[1]: cri-containerd-da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6.scope: Deactivated successfully. Nov 8 00:33:36.375279 systemd[1]: cri-containerd-da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6.scope: Consumed 5.652s CPU time, 24.5M memory peak, 0B memory swap peak. Nov 8 00:33:36.391967 kubelet[2635]: I1108 00:33:36.391922 2635 scope.go:117] "RemoveContainer" containerID="4a170432b0f97442a20012a9c31b3b795bd12636914e1e6ce432780a2c1c8321" Nov 8 00:33:36.431480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6-rootfs.mount: Deactivated successfully. Nov 8 00:33:36.434980 containerd[1513]: time="2025-11-08T00:33:36.434898079Z" level=info msg="shim disconnected" id=da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6 namespace=k8s.io Nov 8 00:33:36.435492 containerd[1513]: time="2025-11-08T00:33:36.435216738Z" level=warning msg="cleaning up after shim disconnected" id=da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6 namespace=k8s.io Nov 8 00:33:36.435492 containerd[1513]: time="2025-11-08T00:33:36.435244019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:33:36.436690 containerd[1513]: time="2025-11-08T00:33:36.436450430Z" level=info msg="CreateContainer within sandbox \"7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:33:36.543673 containerd[1513]: time="2025-11-08T00:33:36.543619348Z" level=info msg="CreateContainer within sandbox \"7c9c0a364c9f7bc6bbf9ba5cc8983ad68db7d2f567802292607a0a24c14e3250\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d4171323ee761a901a491551bc0fd8e494ab3a69c6b8ceb787a68cac5f727d5e\"" Nov 8 00:33:36.544610 containerd[1513]: time="2025-11-08T00:33:36.544581360Z" level=info msg="StartContainer for \"d4171323ee761a901a491551bc0fd8e494ab3a69c6b8ceb787a68cac5f727d5e\"" Nov 8 00:33:36.614572 systemd[1]: Started cri-containerd-d4171323ee761a901a491551bc0fd8e494ab3a69c6b8ceb787a68cac5f727d5e.scope - libcontainer container d4171323ee761a901a491551bc0fd8e494ab3a69c6b8ceb787a68cac5f727d5e. 
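The sequence just above is a container restart end to end: the systemd scope for the old container is deactivated, containerd reports the shim disconnected, kubelet logs "RemoveContainer" for the dead ID, then issues "CreateContainer within sandbox ... Attempt:1" and starts the replacement in the same pod sandbox. A toy model of that attempt bookkeeping; the types and names are hypothetical, not kubelet internals.

    // Toy model of the restart sequence logged above.
    package main

    import "fmt"

    type containerAttempt struct {
    	Name    string
    	Attempt int
    }

    // nextAttempt models RemoveContainer followed by CreateContainer
    // for the same name in the same sandbox, bumping the counter.
    func nextAttempt(dead containerAttempt) containerAttempt {
    	return containerAttempt{Name: dead.Name, Attempt: dead.Attempt + 1}
    }

    func main() {
    	fmt.Printf("%+v\n", nextAttempt(containerAttempt{Name: "tigera-operator", Attempt: 0}))
    	// {Name:tigera-operator Attempt:1}
    }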
Nov 8 00:33:36.659498 containerd[1513]: time="2025-11-08T00:33:36.658650949Z" level=info msg="StartContainer for \"d4171323ee761a901a491551bc0fd8e494ab3a69c6b8ceb787a68cac5f727d5e\" returns successfully" Nov 8 00:33:37.362736 kubelet[2635]: I1108 00:33:37.362650 2635 scope.go:117] "RemoveContainer" containerID="da8e36063db927b80a105db9203083c2c3c5dcd9f67ff5dc24affecc2c5bd2b6" Nov 8 00:33:37.365156 containerd[1513]: time="2025-11-08T00:33:37.365092635Z" level=info msg="CreateContainer within sandbox \"bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:33:37.415302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580938184.mount: Deactivated successfully. Nov 8 00:33:37.418677 containerd[1513]: time="2025-11-08T00:33:37.417329675Z" level=info msg="CreateContainer within sandbox \"bd4b1c431457570025c28a81a1dff3f65de4098c112bfd6bbc4f6f52f97fb8b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e58edd46c48003ad4b6fcf0dd7561553feee5d12c686468e3bc363efe407025a\"" Nov 8 00:33:37.418974 containerd[1513]: time="2025-11-08T00:33:37.418930289Z" level=info msg="StartContainer for \"e58edd46c48003ad4b6fcf0dd7561553feee5d12c686468e3bc363efe407025a\"" Nov 8 00:33:37.433274 kubelet[2635]: E1108 00:33:37.433209 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p4b9r" podUID="8b84b572-8061-4813-b462-3eea6f974bdd" Nov 8 00:33:37.456691 systemd[1]: Started cri-containerd-e58edd46c48003ad4b6fcf0dd7561553feee5d12c686468e3bc363efe407025a.scope - libcontainer container e58edd46c48003ad4b6fcf0dd7561553feee5d12c686468e3bc363efe407025a. 
Nov 8 00:33:37.504907 containerd[1513]: time="2025-11-08T00:33:37.504852324Z" level=info msg="StartContainer for \"e58edd46c48003ad4b6fcf0dd7561553feee5d12c686468e3bc363efe407025a\" returns successfully" Nov 8 00:33:39.851943 kubelet[2635]: E1108 00:33:39.832017 2635 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36474->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-dcea41702a.1875e0cf1eb37654 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-dcea41702a,UID:f2746dbe9cd2a311acb13c57a2d1d5f4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-dcea41702a,},FirstTimestamp:2025-11-08 00:33:29.363347028 +0000 UTC m=+181.101842931,LastTimestamp:2025-11-08 00:33:29.363347028 +0000 UTC m=+181.101842931,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-dcea41702a,}" Nov 8 00:33:41.305520 systemd[1]: cri-containerd-3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f.scope: Deactivated successfully. Nov 8 00:33:41.307034 systemd[1]: cri-containerd-3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f.scope: Consumed 2.809s CPU time, 24.5M memory peak, 0B memory swap peak. Nov 8 00:33:41.351273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f-rootfs.mount: Deactivated successfully. Nov 8 00:33:41.368227 containerd[1513]: time="2025-11-08T00:33:41.368162287Z" level=info msg="shim disconnected" id=3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f namespace=k8s.io Nov 8 00:33:41.368734 containerd[1513]: time="2025-11-08T00:33:41.368666386Z" level=warning msg="cleaning up after shim disconnected" id=3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f namespace=k8s.io Nov 8 00:33:41.368734 containerd[1513]: time="2025-11-08T00:33:41.368688669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:33:41.433195 kubelet[2635]: I1108 00:33:41.432840 2635 scope.go:117] "RemoveContainer" containerID="3f5f8d9b13c47ca392a1e6e479eaff47e391d0adc32db9f03833179a8ae5c49f" Nov 8 00:33:41.435029 containerd[1513]: time="2025-11-08T00:33:41.434989452Z" level=info msg="CreateContainer within sandbox \"cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:33:41.462959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2666433593.mount: Deactivated successfully. 
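Both the earlier "Failed to update lease" error and the rejected Unhealthy event above trace back to the same cause: reads from etcd at 10.0.0.2:2379 timing out. Kubelet renews its node Lease on a fixed interval (roughly every 10s by default, as I understand it) and simply logs and retries on failure; the node only goes NotReady if the lease stays stale past the grace period. A sketch of that renew loop; renewOnce is a hypothetical stand-in for the client-side Lease update, and the demo uses a shortened interval so it terminates.

    // Sketch of a periodic lease-renewal loop that logs failures
    // the way the "Failed to update lease" entry above does.
    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    func renewLoop(ctx context.Context, interval time.Duration, renewOnce func(context.Context) error) {
    	t := time.NewTicker(interval)
    	defer t.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return
    		case <-t.C:
    			if err := renewOnce(ctx); err != nil {
    				// Log and retry; give up only via ctx.
    				fmt.Println("Failed to update lease:", err)
    			}
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 3500*time.Millisecond)
    	defer cancel()
    	renewLoop(ctx, time.Second, func(context.Context) error {
    		return errors.New("read tcp 10.0.0.3->10.0.0.2:2379: read: connection timed out")
    	})
    }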
Nov 8 00:33:41.463772 containerd[1513]: time="2025-11-08T00:33:41.463249180Z" level=info msg="CreateContainer within sandbox \"cc4dc884f230fe0a74ababf82bd76d9eb417094bf71c470be96e6d15e0e54cb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4c761b6689f514bbf81fe5de28bb381c834545197ee504826f734d5ba41d1bfa\"" Nov 8 00:33:41.465660 containerd[1513]: time="2025-11-08T00:33:41.465632565Z" level=info msg="StartContainer for \"4c761b6689f514bbf81fe5de28bb381c834545197ee504826f734d5ba41d1bfa\"" Nov 8 00:33:41.517620 systemd[1]: Started cri-containerd-4c761b6689f514bbf81fe5de28bb381c834545197ee504826f734d5ba41d1bfa.scope - libcontainer container 4c761b6689f514bbf81fe5de28bb381c834545197ee504826f734d5ba41d1bfa. Nov 8 00:33:41.580293 containerd[1513]: time="2025-11-08T00:33:41.580076789Z" level=info msg="StartContainer for \"4c761b6689f514bbf81fe5de28bb381c834545197ee504826f734d5ba41d1bfa\" returns successfully" Nov 8 00:33:42.357472 systemd[1]: run-containerd-runc-k8s.io-4c761b6689f514bbf81fe5de28bb381c834545197ee504826f734d5ba41d1bfa-runc.2XFNLR.mount: Deactivated successfully. Nov 8 00:33:43.431971 kubelet[2635]: E1108 00:33:43.431908 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5b858bf6c9-q5c9k" podUID="7bfc75bc-86f9-445a-9d64-08b33fc703e4"
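The root cause throughout the section is that the ghcr.io/flatcar/calico/*:v3.30.4 tags do not resolve, which containerd surfaced earlier as "trying next host - response was http.StatusNotFound". That can be reproduced directly against the OCI distribution API: a request to /v2/<name>/manifests/<tag> answers 404 for a missing tag. A diagnostic sketch, not part of the log; note that ghcr.io requires a bearer token even for anonymous pulls, which this sketch omits, so expect 401 rather than 404 without one — the URL shape and 404 interpretation are the point.

    // Probe a registry manifest the way containerd's resolver does.
    package main

    import (
    	"fmt"
    	"net/http"
    )

    func main() {
    	// OCI distribution spec: /v2/<name>/manifests/<reference>
    	url := "https://ghcr.io/v2/flatcar/calico/csi/manifests/v3.30.4"
    	req, err := http.NewRequest(http.MethodHead, url, nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
    	resp, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	switch resp.StatusCode {
    	case http.StatusNotFound:
    		fmt.Println("tag not found, matching the log")
    	case http.StatusUnauthorized:
    		fmt.Println("registry wants a token first (expected for ghcr.io)")
    	default:
    		fmt.Println("status:", resp.Status)
    	}
    }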