Nov 4 23:55:06.074438 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 4 22:00:22 -00 2025
Nov 4 23:55:06.074463 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:55:06.075187 kernel: BIOS-provided physical RAM map:
Nov 4 23:55:06.075196 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 4 23:55:06.075203 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 4 23:55:06.075209 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 4 23:55:06.075216 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 4 23:55:06.075223 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 4 23:55:06.075234 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 4 23:55:06.075240 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 4 23:55:06.075247 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 4 23:55:06.075253 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 4 23:55:06.075259 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 4 23:55:06.075266 kernel: NX (Execute Disable) protection: active
Nov 4 23:55:06.075275 kernel: APIC: Static calls initialized
Nov 4 23:55:06.075282 kernel: SMBIOS 3.0.0 present.
Nov 4 23:55:06.075289 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 4 23:55:06.075296 kernel: DMI: Memory slots populated: 1/1
Nov 4 23:55:06.075303 kernel: Hypervisor detected: KVM
Nov 4 23:55:06.075310 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 4 23:55:06.075316 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 4 23:55:06.075323 kernel: kvm-clock: using sched offset of 3721322370 cycles
Nov 4 23:55:06.075350 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 4 23:55:06.075367 kernel: tsc: Detected 2445.406 MHz processor
Nov 4 23:55:06.075380 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 4 23:55:06.075388 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 4 23:55:06.075395 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 4 23:55:06.075403 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 4 23:55:06.075410 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 4 23:55:06.075418 kernel: Using GB pages for direct mapping
Nov 4 23:55:06.075427 kernel: ACPI: Early table checksum verification disabled
Nov 4 23:55:06.075434 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Nov 4 23:55:06.075441 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075448 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075456 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075463 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 4 23:55:06.075484 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075494 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075502 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075509 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 4 23:55:06.075520 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Nov 4 23:55:06.075528 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Nov 4 23:55:06.075535 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 4 23:55:06.075544 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Nov 4 23:55:06.075551 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Nov 4 23:55:06.075559 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Nov 4 23:55:06.075566 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Nov 4 23:55:06.075573 kernel: No NUMA configuration found
Nov 4 23:55:06.075581 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 4 23:55:06.075590 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff]
Nov 4 23:55:06.075597 kernel: Zone ranges:
Nov 4 23:55:06.075605 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 4 23:55:06.075613 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 4 23:55:06.075620 kernel: Normal empty
Nov 4 23:55:06.075627 kernel: Device empty
Nov 4 23:55:06.075635 kernel: Movable zone start for each node
Nov 4 23:55:06.075644 kernel: Early memory node ranges
Nov 4 23:55:06.075651 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 4 23:55:06.075659 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 4 23:55:06.075666 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 4 23:55:06.075674 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 4 23:55:06.075681 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 4 23:55:06.075689 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 4 23:55:06.075696 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 4 23:55:06.075705 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 4 23:55:06.075712 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 4 23:55:06.075720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 4 23:55:06.075728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 4 23:55:06.075735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 4 23:55:06.075743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 4 23:55:06.075750 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 4 23:55:06.075759 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 4 23:55:06.075766 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 4 23:55:06.075774 kernel: CPU topo: Max. logical packages: 1
Nov 4 23:55:06.075781 kernel: CPU topo: Max. logical dies: 1
Nov 4 23:55:06.075789 kernel: CPU topo: Max. dies per package: 1
Nov 4 23:55:06.075796 kernel: CPU topo: Max. threads per core: 1
Nov 4 23:55:06.075803 kernel: CPU topo: Num. cores per package: 2
Nov 4 23:55:06.075810 kernel: CPU topo: Num. threads per package: 2
Nov 4 23:55:06.075818 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 4 23:55:06.075826 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 4 23:55:06.075833 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 4 23:55:06.075841 kernel: Booting paravirtualized kernel on KVM
Nov 4 23:55:06.075849 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 4 23:55:06.075856 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 4 23:55:06.075864 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 4 23:55:06.075871 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 4 23:55:06.075880 kernel: pcpu-alloc: [0] 0 1
Nov 4 23:55:06.075887 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 4 23:55:06.075896 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2
Nov 4 23:55:06.075905 kernel: random: crng init done
Nov 4 23:55:06.075912 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 4 23:55:06.075918 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 4 23:55:06.075925 kernel: Fallback order for Node 0: 0
Nov 4 23:55:06.075931 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866
Nov 4 23:55:06.075937 kernel: Policy zone: DMA32
Nov 4 23:55:06.075943 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 4 23:55:06.075950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 4 23:55:06.075956 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 4 23:55:06.075962 kernel: ftrace: allocated 157 pages with 5 groups
Nov 4 23:55:06.075968 kernel: Dynamic Preempt: voluntary
Nov 4 23:55:06.075975 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 4 23:55:06.075982 kernel: rcu: RCU event tracing is enabled.
Nov 4 23:55:06.075988 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 4 23:55:06.075994 kernel: Trampoline variant of Tasks RCU enabled.
Nov 4 23:55:06.076001 kernel: Rude variant of Tasks RCU enabled.
Nov 4 23:55:06.076007 kernel: Tracing variant of Tasks RCU enabled.
Nov 4 23:55:06.076012 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 4 23:55:06.076020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 4 23:55:06.076026 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:55:06.076032 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:55:06.076038 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 4 23:55:06.076045 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 4 23:55:06.076051 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 4 23:55:06.076057 kernel: Console: colour VGA+ 80x25
Nov 4 23:55:06.076064 kernel: printk: legacy console [tty0] enabled
Nov 4 23:55:06.076070 kernel: printk: legacy console [ttyS0] enabled
Nov 4 23:55:06.076076 kernel: ACPI: Core revision 20240827
Nov 4 23:55:06.076287 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 4 23:55:06.076296 kernel: APIC: Switch to symmetric I/O mode setup
Nov 4 23:55:06.076302 kernel: x2apic enabled
Nov 4 23:55:06.076309 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 4 23:55:06.076315 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 4 23:55:06.076322 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fc4eb620, max_idle_ns: 440795316590 ns
Nov 4 23:55:06.076328 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Nov 4 23:55:06.076348 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 4 23:55:06.076355 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 4 23:55:06.076361 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 4 23:55:06.076368 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 4 23:55:06.076375 kernel: Spectre V2 : Mitigation: Retpolines
Nov 4 23:55:06.076381 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 4 23:55:06.076387 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 4 23:55:06.076394 kernel: active return thunk: retbleed_return_thunk
Nov 4 23:55:06.076400 kernel: RETBleed: Mitigation: untrained return thunk
Nov 4 23:55:06.076406 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 4 23:55:06.076413 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 4 23:55:06.076420 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 4 23:55:06.076427 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 4 23:55:06.076433 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 4 23:55:06.076439 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 4 23:55:06.076446 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 4 23:55:06.076452 kernel: Freeing SMP alternatives memory: 32K
Nov 4 23:55:06.076459 kernel: pid_max: default: 32768 minimum: 301
Nov 4 23:55:06.076466 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 4 23:55:06.076481 kernel: landlock: Up and running.
Nov 4 23:55:06.076488 kernel: SELinux: Initializing.
Nov 4 23:55:06.076494 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:55:06.076500 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 4 23:55:06.076507 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 4 23:55:06.076515 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 4 23:55:06.076521 kernel: ... version: 0
Nov 4 23:55:06.076527 kernel: ... bit width: 48
Nov 4 23:55:06.076533 kernel: ... generic registers: 6
Nov 4 23:55:06.076540 kernel: ... value mask: 0000ffffffffffff
Nov 4 23:55:06.076546 kernel: ... max period: 00007fffffffffff
Nov 4 23:55:06.076552 kernel: ... fixed-purpose events: 0
Nov 4 23:55:06.076559 kernel: ... event mask: 000000000000003f
Nov 4 23:55:06.076566 kernel: signal: max sigframe size: 1776
Nov 4 23:55:06.076573 kernel: rcu: Hierarchical SRCU implementation.
Nov 4 23:55:06.076579 kernel: rcu: Max phase no-delay instances is 400.
Nov 4 23:55:06.076586 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 4 23:55:06.076592 kernel: smp: Bringing up secondary CPUs ...
Nov 4 23:55:06.076598 kernel: smpboot: x86: Booting SMP configuration:
Nov 4 23:55:06.076605 kernel: .... node #0, CPUs: #1
Nov 4 23:55:06.076612 kernel: smp: Brought up 1 node, 2 CPUs
Nov 4 23:55:06.076618 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS)
Nov 4 23:55:06.076625 kernel: Memory: 1940308K/2047464K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15936K init, 2108K bss, 102612K reserved, 0K cma-reserved)
Nov 4 23:55:06.076631 kernel: devtmpfs: initialized
Nov 4 23:55:06.076638 kernel: x86/mm: Memory block size: 128MB
Nov 4 23:55:06.076644 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 4 23:55:06.076651 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 4 23:55:06.076658 kernel: pinctrl core: initialized pinctrl subsystem
Nov 4 23:55:06.076665 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 4 23:55:06.076671 kernel: audit: initializing netlink subsys (disabled)
Nov 4 23:55:06.076678 kernel: audit: type=2000 audit(1762300503.016:1): state=initialized audit_enabled=0 res=1
Nov 4 23:55:06.076684 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 4 23:55:06.076690 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 4 23:55:06.076696 kernel: cpuidle: using governor menu
Nov 4 23:55:06.076704 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 4 23:55:06.076710 kernel: dca service started, version 1.12.1
Nov 4 23:55:06.076717 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 4 23:55:06.076723 kernel: PCI: Using configuration type 1 for base access
Nov 4 23:55:06.076811 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 4 23:55:06.076823 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 4 23:55:06.076830 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 4 23:55:06.076840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 4 23:55:06.076847 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 4 23:55:06.076853 kernel: ACPI: Added _OSI(Module Device)
Nov 4 23:55:06.076859 kernel: ACPI: Added _OSI(Processor Device)
Nov 4 23:55:06.076866 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 4 23:55:06.076872 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 4 23:55:06.076878 kernel: ACPI: Interpreter enabled
Nov 4 23:55:06.076886 kernel: ACPI: PM: (supports S0 S5)
Nov 4 23:55:06.076892 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 4 23:55:06.076900 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 4 23:55:06.076906 kernel: PCI: Using E820 reservations for host bridge windows
Nov 4 23:55:06.076913 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 4 23:55:06.076919 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 4 23:55:06.077062 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 4 23:55:06.077155 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 4 23:55:06.077236 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 4 23:55:06.077246 kernel: PCI host bridge to bus 0000:00
Nov 4 23:55:06.078784 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 4 23:55:06.078920 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 4 23:55:06.079034 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 4 23:55:06.079149 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 4 23:55:06.079260 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 4 23:55:06.079383 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 4 23:55:06.079459 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 4 23:55:06.079575 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 4 23:55:06.079674 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 4 23:55:06.083546 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref]
Nov 4 23:55:06.083661 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 4 23:55:06.083781 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 4 23:55:06.083867 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 4 23:55:06.083950 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 4 23:55:06.084043 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.084124 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 4 23:55:06.084203 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 4 23:55:06.084281 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 4 23:55:06.084378 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 4 23:55:06.084487 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.084574 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 4 23:55:06.084654 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 4 23:55:06.084758 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 4 23:55:06.084846 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 4 23:55:06.084933 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.085017 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 4 23:55:06.085095 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 4 23:55:06.085173 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 4 23:55:06.085249 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 4 23:55:06.085346 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.085429 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 4 23:55:06.089596 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 4 23:55:06.089743 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 4 23:55:06.089959 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 4 23:55:06.090126 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.090243 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 4 23:55:06.090407 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 4 23:55:06.090571 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 4 23:55:06.090687 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 4 23:55:06.090810 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.090932 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 4 23:55:06.091021 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 4 23:55:06.091134 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 4 23:55:06.091262 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 4 23:55:06.091416 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.091565 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 4 23:55:06.091664 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 4 23:55:06.091745 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 4 23:55:06.091832 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 4 23:55:06.091923 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.092038 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 4 23:55:06.092122 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 4 23:55:06.092209 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 4 23:55:06.092290 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 4 23:55:06.092401 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 4 23:55:06.092515 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 4 23:55:06.092655 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 4 23:55:06.092784 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 4 23:55:06.092873 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 4 23:55:06.092961 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 4 23:55:06.093051 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 4 23:55:06.093146 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 4 23:55:06.093226 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f]
Nov 4 23:55:06.095559 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff]
Nov 4 23:55:06.095658 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 4 23:55:06.095773 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 4 23:55:06.095869 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 4 23:55:06.095991 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff]
Nov 4 23:55:06.096143 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 4 23:55:06.096287 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref]
Nov 4 23:55:06.096508 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 4 23:55:06.096675 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Nov 4 23:55:06.096821 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit]
Nov 4 23:55:06.096980 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 4 23:55:06.097134 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Nov 4 23:55:06.097288 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff]
Nov 4 23:55:06.098546 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 4 23:55:06.098682 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 4 23:55:06.098808 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 4 23:55:06.098895 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 4 23:55:06.098976 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 4 23:55:06.099063 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 4 23:55:06.099149 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff]
Nov 4 23:55:06.099229 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 4 23:55:06.099307 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 4 23:55:06.099412 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Nov 4 23:55:06.099512 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff]
Nov 4 23:55:06.099600 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 4 23:55:06.099681 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 4 23:55:06.099691 kernel: acpiphp: Slot [0] registered
Nov 4 23:55:06.099806 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 4 23:55:06.099891 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff]
Nov 4 23:55:06.099973 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 4 23:55:06.100056 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref]
Nov 4 23:55:06.100135 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 4 23:55:06.100146 kernel: acpiphp: Slot [0-2] registered
Nov 4 23:55:06.100222 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 4 23:55:06.100232 kernel: acpiphp: Slot [0-3] registered
Nov 4 23:55:06.100308 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 4 23:55:06.100320 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 4 23:55:06.100327 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 4 23:55:06.100348 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 4 23:55:06.100354 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 4 23:55:06.100361 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 4 23:55:06.100368 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 4 23:55:06.100374 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 4 23:55:06.100382 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 4 23:55:06.100389 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 4 23:55:06.100395 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 4 23:55:06.100402 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 4 23:55:06.100409 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 4 23:55:06.100415 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 4 23:55:06.100422 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 4 23:55:06.100429 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 4 23:55:06.100436 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 4 23:55:06.100442 kernel: iommu: Default domain type: Translated
Nov 4 23:55:06.100449 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 4 23:55:06.100545 kernel: PCI: Using ACPI for IRQ routing
Nov 4 23:55:06.100555 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 4 23:55:06.100562 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 4 23:55:06.100569 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 4 23:55:06.100667 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 4 23:55:06.100784 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 4 23:55:06.100910 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 4 23:55:06.100925 kernel: vgaarb: loaded
Nov 4 23:55:06.100937 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 4 23:55:06.100948 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 4 23:55:06.100959 kernel: clocksource: Switched to clocksource kvm-clock
Nov 4 23:55:06.100974 kernel: VFS: Disk quotas dquot_6.6.0
Nov 4 23:55:06.100985 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 4 23:55:06.100996 kernel: pnp: PnP ACPI init
Nov 4 23:55:06.101152 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 4 23:55:06.101167 kernel: pnp: PnP ACPI: found 5 devices
Nov 4 23:55:06.101174 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 4 23:55:06.101184 kernel: NET: Registered PF_INET protocol family
Nov 4 23:55:06.101191 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 4 23:55:06.101198 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 4 23:55:06.101204 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 4 23:55:06.101211 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 4 23:55:06.101217 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 4 23:55:06.101224 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 4 23:55:06.101232 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:55:06.101239 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 4 23:55:06.101245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 4 23:55:06.101252 kernel: NET: Registered PF_XDP protocol family
Nov 4 23:55:06.101353 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 4 23:55:06.101460 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 4 23:55:06.101582 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 4 23:55:06.101670 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Nov 4 23:55:06.101784 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Nov 4 23:55:06.101884 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Nov 4 23:55:06.101967 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 4 23:55:06.102047 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 4 23:55:06.102125 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 4 23:55:06.102204 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 4 23:55:06.102285 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 4 23:55:06.102387 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 4 23:55:06.102468 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 4 23:55:06.102627 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 4 23:55:06.102710 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 4 23:55:06.102812 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 4 23:55:06.102893 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 4 23:55:06.102976 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 4 23:55:06.103054 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 4 23:55:06.103131 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 4 23:55:06.103207 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 4 23:55:06.103282 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 4 23:55:06.103378 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 4 23:55:06.103457 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 4 23:55:06.103557 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 4 23:55:06.103636 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 4 23:55:06.103720 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 4 23:55:06.103820 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 4 23:55:06.103899 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 4 23:55:06.103975 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 4 23:55:06.104056 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 4 23:55:06.104133 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 4 23:55:06.104210 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 4 23:55:06.104287 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 4 23:55:06.104386 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 4 23:55:06.104491 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 4 23:55:06.104577 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 4 23:55:06.104650 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 4 23:55:06.104734 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 4 23:55:06.104827 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 4 23:55:06.104899 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 4 23:55:06.104974 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 4 23:55:06.105057 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 4 23:55:06.105132 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 4 23:55:06.105213 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 4 23:55:06.105287 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 4 23:55:06.105389 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 4 23:55:06.105466 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 4 23:55:06.105588 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 4 23:55:06.105663 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 4 23:55:06.105769 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 4 23:55:06.105853 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 4 23:55:06.105937 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 4 23:55:06.106013 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 4 23:55:06.106091 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 4 23:55:06.106164 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 4 23:55:06.106235 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 4 23:55:06.106317 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 4 23:55:06.106409 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 4 23:55:06.106497 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 4 23:55:06.106578 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 4 23:55:06.106650 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 4 23:55:06.106744 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 4 23:55:06.106761 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 4 23:55:06.106769 kernel: PCI: CLS 0 bytes, default 64
Nov 4 23:55:06.106776 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fc4eb620, max_idle_ns: 440795316590 ns
Nov 4 23:55:06.106783 kernel: 
Initialise system trusted keyrings Nov 4 23:55:06.106790 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 4 23:55:06.106797 kernel: Key type asymmetric registered Nov 4 23:55:06.106807 kernel: Asymmetric key parser 'x509' registered Nov 4 23:55:06.106814 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 4 23:55:06.106821 kernel: io scheduler mq-deadline registered Nov 4 23:55:06.106828 kernel: io scheduler kyber registered Nov 4 23:55:06.106834 kernel: io scheduler bfq registered Nov 4 23:55:06.106919 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 4 23:55:06.106997 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 4 23:55:06.107078 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 4 23:55:06.107156 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 4 23:55:06.107232 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 4 23:55:06.107309 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 4 23:55:06.107408 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 4 23:55:06.107502 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 4 23:55:06.107582 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 4 23:55:06.107662 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 4 23:55:06.107761 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 4 23:55:06.107843 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 4 23:55:06.107920 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 4 23:55:06.107998 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 4 23:55:06.108075 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 4 23:55:06.108155 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 4 23:55:06.108167 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 4 23:55:06.108242 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Nov 4 23:55:06.108324 kernel: pcieport 0000:00:03.0: 
AER: enabled with IRQ 32 Nov 4 23:55:06.108353 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 4 23:55:06.108366 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Nov 4 23:55:06.108374 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 23:55:06.108381 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 4 23:55:06.108388 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 4 23:55:06.108395 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 4 23:55:06.108402 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 4 23:55:06.108409 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 4 23:55:06.108521 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 4 23:55:06.108601 kernel: rtc_cmos 00:03: registered as rtc0 Nov 4 23:55:06.108674 kernel: rtc_cmos 00:03: setting system clock to 2025-11-04T23:55:04 UTC (1762300504) Nov 4 23:55:06.108783 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 4 23:55:06.108797 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 4 23:55:06.108805 kernel: NET: Registered PF_INET6 protocol family Nov 4 23:55:06.108816 kernel: Segment Routing with IPv6 Nov 4 23:55:06.108823 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 23:55:06.108830 kernel: NET: Registered PF_PACKET protocol family Nov 4 23:55:06.108837 kernel: Key type dns_resolver registered Nov 4 23:55:06.108844 kernel: IPI shorthand broadcast: enabled Nov 4 23:55:06.108851 kernel: sched_clock: Marking stable (1474012438, 146124584)->(1638323404, -18186382) Nov 4 23:55:06.108857 kernel: registered taskstats version 1 Nov 4 23:55:06.108865 kernel: Loading compiled-in X.509 certificates Nov 4 23:55:06.108872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ace064fb6689a15889f35c6439909c760a72ef44' Nov 4 23:55:06.108879 kernel: Demotion targets for Node 0: null Nov 4 
23:55:06.108886 kernel: Key type .fscrypt registered Nov 4 23:55:06.108892 kernel: Key type fscrypt-provisioning registered Nov 4 23:55:06.108899 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 4 23:55:06.108906 kernel: ima: Allocated hash algorithm: sha1 Nov 4 23:55:06.108914 kernel: ima: No architecture policies found Nov 4 23:55:06.108921 kernel: clk: Disabling unused clocks Nov 4 23:55:06.108928 kernel: Freeing unused kernel image (initmem) memory: 15936K Nov 4 23:55:06.108936 kernel: Write protecting the kernel read-only data: 40960k Nov 4 23:55:06.108943 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 4 23:55:06.108950 kernel: Run /init as init process Nov 4 23:55:06.108956 kernel: with arguments: Nov 4 23:55:06.108964 kernel: /init Nov 4 23:55:06.108971 kernel: with environment: Nov 4 23:55:06.108978 kernel: HOME=/ Nov 4 23:55:06.108984 kernel: TERM=linux Nov 4 23:55:06.108992 kernel: ACPI: bus type USB registered Nov 4 23:55:06.108998 kernel: usbcore: registered new interface driver usbfs Nov 4 23:55:06.109010 kernel: usbcore: registered new interface driver hub Nov 4 23:55:06.109028 kernel: usbcore: registered new device driver usb Nov 4 23:55:06.109157 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 4 23:55:06.109245 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 4 23:55:06.109327 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 4 23:55:06.109426 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 4 23:55:06.109850 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 4 23:55:06.109943 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 4 23:55:06.110052 kernel: hub 1-0:1.0: USB hub found Nov 4 23:55:06.110140 kernel: hub 1-0:1.0: 4 ports detected Nov 4 23:55:06.110237 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Nov 4 23:55:06.110355 kernel: hub 2-0:1.0: USB hub found Nov 4 23:55:06.110454 kernel: hub 2-0:1.0: 4 ports detected Nov 4 23:55:06.110467 kernel: SCSI subsystem initialized Nov 4 23:55:06.110493 kernel: libata version 3.00 loaded. Nov 4 23:55:06.110582 kernel: ahci 0000:00:1f.2: version 3.0 Nov 4 23:55:06.110593 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 4 23:55:06.110704 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 4 23:55:06.110815 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 4 23:55:06.110901 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 4 23:55:06.113543 kernel: scsi host0: ahci Nov 4 23:55:06.113649 kernel: scsi host1: ahci Nov 4 23:55:06.113760 kernel: scsi host2: ahci Nov 4 23:55:06.113854 kernel: scsi host3: ahci Nov 4 23:55:06.113941 kernel: scsi host4: ahci Nov 4 23:55:06.114033 kernel: scsi host5: ahci Nov 4 23:55:06.114044 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 38 lpm-pol 1 Nov 4 23:55:06.114052 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 38 lpm-pol 1 Nov 4 23:55:06.114059 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 38 lpm-pol 1 Nov 4 23:55:06.114066 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 38 lpm-pol 1 Nov 4 23:55:06.114073 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 38 lpm-pol 1 Nov 4 23:55:06.114082 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 38 lpm-pol 1 Nov 4 23:55:06.114185 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 4 23:55:06.114196 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 4 23:55:06.114203 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 4 23:55:06.114210 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 4 23:55:06.114217 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 4 23:55:06.114226 kernel: ata5: SATA link 
down (SStatus 0 SControl 300) Nov 4 23:55:06.114233 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 4 23:55:06.114240 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 4 23:55:06.114246 kernel: ata1.00: LPM support broken, forcing max_power Nov 4 23:55:06.114254 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 4 23:55:06.114260 kernel: ata1.00: applying bridge limits Nov 4 23:55:06.114267 kernel: ata1.00: LPM support broken, forcing max_power Nov 4 23:55:06.114275 kernel: ata1.00: configured for UDMA/100 Nov 4 23:55:06.114396 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 4 23:55:06.114408 kernel: usbcore: registered new interface driver usbhid Nov 4 23:55:06.114416 kernel: usbhid: USB HID core driver Nov 4 23:55:06.114940 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Nov 4 23:55:06.115051 kernel: scsi host6: Virtio SCSI HBA Nov 4 23:55:06.115156 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 4 23:55:06.115245 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 4 23:55:06.115256 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 4 23:55:06.115358 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Nov 4 23:55:06.115370 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 4 23:55:06.116629 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 4 23:55:06.116774 kernel: sd 6:0:0:0: Power-on or device reset occurred Nov 4 23:55:06.116872 kernel: sd 6:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 4 23:55:06.116960 kernel: sd 6:0:0:0: [sda] Write Protect is off Nov 4 23:55:06.117047 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 4 23:55:06.117134 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 4 
23:55:06.117147 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 4 23:55:06.117154 kernel: GPT:25804799 != 80003071 Nov 4 23:55:06.117161 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 4 23:55:06.117168 kernel: GPT:25804799 != 80003071 Nov 4 23:55:06.117175 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 4 23:55:06.117182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 4 23:55:06.117268 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Nov 4 23:55:06.117280 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 23:55:06.117288 kernel: device-mapper: uevent: version 1.0.3 Nov 4 23:55:06.117295 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 23:55:06.117302 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 4 23:55:06.117309 kernel: raid6: avx2x4 gen() 32081 MB/s Nov 4 23:55:06.117316 kernel: raid6: avx2x2 gen() 30621 MB/s Nov 4 23:55:06.117323 kernel: raid6: avx2x1 gen() 21375 MB/s Nov 4 23:55:06.117347 kernel: raid6: using algorithm avx2x4 gen() 32081 MB/s Nov 4 23:55:06.117354 kernel: raid6: .... 
xor() 4627 MB/s, rmw enabled Nov 4 23:55:06.117361 kernel: raid6: using avx2x2 recovery algorithm Nov 4 23:55:06.117368 kernel: xor: automatically using best checksumming function avx Nov 4 23:55:06.117375 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 23:55:06.117382 kernel: BTRFS: device fsid f719dc90-1cf7-4f08-a80f-0dda441372cc devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (183) Nov 4 23:55:06.117389 kernel: BTRFS info (device dm-0): first mount of filesystem f719dc90-1cf7-4f08-a80f-0dda441372cc Nov 4 23:55:06.117398 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:55:06.117405 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 4 23:55:06.117412 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 23:55:06.117419 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 23:55:06.117426 kernel: loop: module loaded Nov 4 23:55:06.117432 kernel: loop0: detected capacity change from 0 to 100120 Nov 4 23:55:06.117440 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 23:55:06.117448 systemd[1]: Successfully made /usr/ read-only. Nov 4 23:55:06.117459 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 23:55:06.117466 systemd[1]: Detected virtualization kvm. Nov 4 23:55:06.117501 systemd[1]: Detected architecture x86-64. Nov 4 23:55:06.117509 systemd[1]: Running in initrd. Nov 4 23:55:06.117516 systemd[1]: No hostname configured, using default hostname. Nov 4 23:55:06.117525 systemd[1]: Hostname set to <localhost>. Nov 4 23:55:06.117533 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Nov 4 23:55:06.117540 systemd[1]: Queued start job for default target initrd.target. Nov 4 23:55:06.117547 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 23:55:06.117555 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 23:55:06.117562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 23:55:06.117570 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 23:55:06.117580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 23:55:06.117588 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 23:55:06.117595 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 23:55:06.117603 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 23:55:06.117610 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 23:55:06.117619 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 23:55:06.117626 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:55:06.117633 systemd[1]: Reached target slices.target - Slice Units. Nov 4 23:55:06.117641 systemd[1]: Reached target swap.target - Swaps. Nov 4 23:55:06.117648 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:55:06.117656 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 23:55:06.117663 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 23:55:06.117671 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 23:55:06.117678 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 4 23:55:06.117686 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 23:55:06.117693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 23:55:06.117701 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 23:55:06.117712 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:55:06.117727 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 23:55:06.117741 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 23:55:06.117749 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:55:06.117757 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 4 23:55:06.117765 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 23:55:06.117772 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 23:55:06.117780 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:55:06.117787 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:55:06.117796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:06.117804 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 23:55:06.117811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:55:06.117820 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 23:55:06.117827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 23:55:06.117857 systemd-journald[319]: Collecting audit messages is disabled. Nov 4 23:55:06.117878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 4 23:55:06.117887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 23:55:06.117895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:55:06.117902 kernel: Bridge firewalling registered Nov 4 23:55:06.117909 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:55:06.117917 systemd-journald[319]: Journal started Nov 4 23:55:06.117935 systemd-journald[319]: Runtime Journal (/run/log/journal/f65a3935a81b4bf4af434eb8fd6de0ed) is 4.7M, max 38.3M, 33.5M free. Nov 4 23:55:06.101469 systemd-modules-load[320]: Inserted module 'br_netfilter' Nov 4 23:55:06.147292 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:55:06.148187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:06.149373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:55:06.152793 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 23:55:06.156561 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:55:06.175598 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:55:06.186630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:55:06.188375 systemd-tmpfiles[343]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 23:55:06.191585 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:55:06.199621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 23:55:06.201165 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 4 23:55:06.204622 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 4 23:55:06.228166 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c57c40de146020da5f35a7230cc1da8f1a5a7a7af49d0754317609f7e94976e2 Nov 4 23:55:06.237374 systemd-resolved[356]: Positive Trust Anchors: Nov 4 23:55:06.238068 systemd-resolved[356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:55:06.238075 systemd-resolved[356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:55:06.238102 systemd-resolved[356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:55:06.261677 systemd-resolved[356]: Defaulting to hostname 'linux'. Nov 4 23:55:06.262498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:55:06.263032 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:55:06.306514 kernel: Loading iSCSI transport class v2.0-870. 
Nov 4 23:55:06.320514 kernel: iscsi: registered transport (tcp) Nov 4 23:55:06.340522 kernel: iscsi: registered transport (qla4xxx) Nov 4 23:55:06.340616 kernel: QLogic iSCSI HBA Driver Nov 4 23:55:06.364005 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:55:06.381823 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:55:06.384579 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:55:06.429397 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 23:55:06.432835 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 23:55:06.435629 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 23:55:06.470190 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 23:55:06.472890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 23:55:06.496879 systemd-udevd[604]: Using default interface naming scheme 'v257'. Nov 4 23:55:06.506946 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:55:06.510246 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 23:55:06.525229 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 23:55:06.527260 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:55:06.538198 dracut-pre-trigger[686]: rd.md=0: removing MD RAID activation Nov 4 23:55:06.561799 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 23:55:06.564905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 23:55:06.574650 systemd-networkd[699]: lo: Link UP Nov 4 23:55:06.574658 systemd-networkd[699]: lo: Gained carrier Nov 4 23:55:06.575067 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:55:06.576142 systemd[1]: Reached target network.target - Network. Nov 4 23:55:06.633029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 23:55:06.637945 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 23:55:06.724041 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 4 23:55:06.733019 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 4 23:55:06.759755 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 4 23:55:06.776021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 4 23:55:06.783582 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 23:55:06.794507 kernel: cryptd: max_cpu_qlen set to 1000 Nov 4 23:55:06.800548 disk-uuid[771]: Primary Header is updated. Nov 4 23:55:06.800548 disk-uuid[771]: Secondary Entries is updated. Nov 4 23:55:06.800548 disk-uuid[771]: Secondary Header is updated. Nov 4 23:55:06.840521 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 4 23:55:06.840587 kernel: AES CTR mode by8 optimization enabled Nov 4 23:55:06.842640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:55:06.843512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:06.845277 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 4 23:55:06.851269 systemd-networkd[699]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:06.851276 systemd-networkd[699]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:55:06.851607 systemd-networkd[699]: eth0: Link UP Nov 4 23:55:06.859064 systemd-networkd[699]: eth0: Gained carrier Nov 4 23:55:06.859342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:06.877347 systemd-networkd[699]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:06.882570 systemd-networkd[699]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:06.882573 systemd-networkd[699]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:55:06.882860 systemd-networkd[699]: eth1: Link UP Nov 4 23:55:06.887533 systemd-networkd[699]: eth1: Gained carrier Nov 4 23:55:06.887544 systemd-networkd[699]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:06.909521 systemd-networkd[699]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 4 23:55:06.927574 systemd-networkd[699]: eth0: DHCPv4 address 46.62.221.150/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 4 23:55:06.966296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 23:55:06.978975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:06.980457 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 23:55:06.980988 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 4 23:55:06.982060 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 23:55:06.983855 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 23:55:07.005552 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 23:55:07.891088 disk-uuid[774]: Warning: The kernel is still using the old partition table. Nov 4 23:55:07.891088 disk-uuid[774]: The new table will be used at the next reboot or after you Nov 4 23:55:07.891088 disk-uuid[774]: run partprobe(8) or kpartx(8) Nov 4 23:55:07.891088 disk-uuid[774]: The operation has completed successfully. Nov 4 23:55:07.899102 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 23:55:07.899246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 23:55:07.902294 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 23:55:07.934542 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (863) Nov 4 23:55:07.934614 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:55:07.937503 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 4 23:55:07.943297 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 4 23:55:07.943363 kernel: BTRFS info (device sda6): turning on async discard Nov 4 23:55:07.943384 kernel: BTRFS info (device sda6): enabling free space tree Nov 4 23:55:07.954515 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd Nov 4 23:55:07.955247 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 23:55:07.957646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 23:55:08.119646 ignition[882]: Ignition 2.22.0 Nov 4 23:55:08.119662 ignition[882]: Stage: fetch-offline Nov 4 23:55:08.121362 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 4 23:55:08.119698 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:08.125640 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 4 23:55:08.119707 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:08.119799 ignition[882]: parsed url from cmdline: ""
Nov 4 23:55:08.119803 ignition[882]: no config URL provided
Nov 4 23:55:08.119808 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:55:08.119817 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:55:08.119821 ignition[882]: failed to fetch config: resource requires networking
Nov 4 23:55:08.119983 ignition[882]: Ignition finished successfully
Nov 4 23:55:08.149661 ignition[888]: Ignition 2.22.0
Nov 4 23:55:08.149679 ignition[888]: Stage: fetch
Nov 4 23:55:08.149880 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:08.149892 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:08.150009 ignition[888]: parsed url from cmdline: ""
Nov 4 23:55:08.150013 ignition[888]: no config URL provided
Nov 4 23:55:08.150020 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Nov 4 23:55:08.150028 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Nov 4 23:55:08.150067 ignition[888]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 4 23:55:08.155433 ignition[888]: GET result: OK
Nov 4 23:55:08.155741 ignition[888]: parsing config with SHA512: edb0954a908cb7c2dd1df78822c794204dad7b7adcef402ad2d9b9db633f3140e2dbd54338b3ec8f2f8c34d07afa3372daf431737ae8af39e7b0cda51ef59fc0
Nov 4 23:55:08.165187 unknown[888]: fetched base config from "system"
Nov 4 23:55:08.165216 unknown[888]: fetched base config from "system"
Nov 4 23:55:08.165229 unknown[888]: fetched user config from "hetzner"
Nov 4 23:55:08.166722 ignition[888]: fetch: fetch complete
Nov 4 23:55:08.171711 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 4 23:55:08.166735 ignition[888]: fetch: fetch passed
Nov 4 23:55:08.166847 ignition[888]: Ignition finished successfully
Nov 4 23:55:08.175619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 4 23:55:08.206526 ignition[894]: Ignition 2.22.0
Nov 4 23:55:08.206544 ignition[894]: Stage: kargs
Nov 4 23:55:08.206716 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:08.206727 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:08.207808 ignition[894]: kargs: kargs passed
Nov 4 23:55:08.209952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 4 23:55:08.207859 ignition[894]: Ignition finished successfully
Nov 4 23:55:08.213684 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 4 23:55:08.239401 ignition[901]: Ignition 2.22.0
Nov 4 23:55:08.239421 ignition[901]: Stage: disks
Nov 4 23:55:08.239653 ignition[901]: no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:08.239664 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:08.240663 ignition[901]: disks: disks passed
Nov 4 23:55:08.242426 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 4 23:55:08.240707 ignition[901]: Ignition finished successfully
Nov 4 23:55:08.243563 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 4 23:55:08.244753 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 4 23:55:08.246788 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 4 23:55:08.248224 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 4 23:55:08.249581 systemd[1]: Reached target basic.target - Basic System.
Nov 4 23:55:08.252280 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 4 23:55:08.296336 systemd-fsck[909]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 4 23:55:08.298879 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 4 23:55:08.302117 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 4 23:55:08.436530 kernel: EXT4-fs (sda9): mounted filesystem cfb29ed0-6faf-41a8-b421-3abc514e4975 r/w with ordered data mode. Quota mode: none.
Nov 4 23:55:08.436785 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 4 23:55:08.437847 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:55:08.441064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:55:08.444563 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 4 23:55:08.446756 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 4 23:55:08.449158 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 4 23:55:08.449196 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:55:08.461818 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 4 23:55:08.467609 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 4 23:55:08.473924 systemd-networkd[699]: eth1: Gained IPv6LL
Nov 4 23:55:08.479296 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (917)
Nov 4 23:55:08.479376 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:55:08.481848 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:55:08.501607 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 23:55:08.501676 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 23:55:08.501686 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 23:55:08.505914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:55:08.539079 systemd-networkd[699]: eth0: Gained IPv6LL
Nov 4 23:55:08.555835 coreos-metadata[919]: Nov 04 23:55:08.555 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 4 23:55:08.557668 coreos-metadata[919]: Nov 04 23:55:08.557 INFO Fetch successful
Nov 4 23:55:08.559093 coreos-metadata[919]: Nov 04 23:55:08.559 INFO wrote hostname ci-4487-0-0-n-1c2c5ddea4 to /sysroot/etc/hostname
Nov 4 23:55:08.561876 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 4 23:55:08.581343 initrd-setup-root[945]: cut: /sysroot/etc/passwd: No such file or directory
Nov 4 23:55:08.587232 initrd-setup-root[952]: cut: /sysroot/etc/group: No such file or directory
Nov 4 23:55:08.592624 initrd-setup-root[959]: cut: /sysroot/etc/shadow: No such file or directory
Nov 4 23:55:08.597905 initrd-setup-root[966]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 4 23:55:08.725545 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 4 23:55:08.728235 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 4 23:55:08.730099 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 4 23:55:08.750162 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 4 23:55:08.753977 kernel: BTRFS info (device sda6): last unmount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:55:08.770928 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 4 23:55:08.790283 ignition[1035]: INFO : Ignition 2.22.0
Nov 4 23:55:08.792496 ignition[1035]: INFO : Stage: mount
Nov 4 23:55:08.792496 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:08.792496 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:08.794516 ignition[1035]: INFO : mount: mount passed
Nov 4 23:55:08.794516 ignition[1035]: INFO : Ignition finished successfully
Nov 4 23:55:08.796122 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 4 23:55:08.797678 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 4 23:55:09.441715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 4 23:55:09.472528 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1046)
Nov 4 23:55:09.477586 kernel: BTRFS info (device sda6): first mount of filesystem c1921af5-b472-4b94-8690-4d6daf91a8cd
Nov 4 23:55:09.477635 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 4 23:55:09.491025 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 4 23:55:09.491075 kernel: BTRFS info (device sda6): turning on async discard
Nov 4 23:55:09.491097 kernel: BTRFS info (device sda6): enabling free space tree
Nov 4 23:55:09.497034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 4 23:55:09.548611 ignition[1063]: INFO : Ignition 2.22.0
Nov 4 23:55:09.548611 ignition[1063]: INFO : Stage: files
Nov 4 23:55:09.550272 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:09.550272 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:09.550272 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Nov 4 23:55:09.553244 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 4 23:55:09.553244 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 4 23:55:09.557812 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 4 23:55:09.558894 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 4 23:55:09.558894 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 4 23:55:09.558455 unknown[1063]: wrote ssh authorized keys file for user: core
Nov 4 23:55:09.562454 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:55:09.562454 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 4 23:55:09.800609 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 4 23:55:10.103171 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 4 23:55:10.103171 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:55:10.105692 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 4 23:55:10.116364 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:55:10.116364 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 4 23:55:10.116364 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:55:10.116364 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:55:10.116364 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:55:10.116364 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 4 23:55:10.457183 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 4 23:55:10.735402 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 4 23:55:10.735402 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 4 23:55:10.739216 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:55:10.739216 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 4 23:55:10.739216 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 4 23:55:10.739216 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 4 23:55:10.739216 ignition[1063]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 4 23:55:10.747949 ignition[1063]: INFO : files: files passed
Nov 4 23:55:10.747949 ignition[1063]: INFO : Ignition finished successfully
Nov 4 23:55:10.742372 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 4 23:55:10.747110 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 4 23:55:10.753670 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 4 23:55:10.770379 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 4 23:55:10.771321 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 4 23:55:10.786422 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:55:10.786422 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:55:10.789391 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 4 23:55:10.789207 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:55:10.791252 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 4 23:55:10.794713 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 4 23:55:10.851141 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 4 23:55:10.851250 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 4 23:55:10.852787 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 4 23:55:10.853793 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 4 23:55:10.855225 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 4 23:55:10.856048 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 4 23:55:10.875298 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:55:10.877106 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 4 23:55:10.904467 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 4 23:55:10.905772 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 4 23:55:10.907304 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:55:10.909665 systemd[1]: Stopped target timers.target - Timer Units.
Nov 4 23:55:10.911788 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 4 23:55:10.912020 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 4 23:55:10.914418 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 4 23:55:10.915635 systemd[1]: Stopped target basic.target - Basic System.
Nov 4 23:55:10.917892 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 4 23:55:10.919802 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 4 23:55:10.921875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 4 23:55:10.923959 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 4 23:55:10.926104 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 4 23:55:10.928052 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 4 23:55:10.930342 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 4 23:55:10.932400 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 4 23:55:10.934765 systemd[1]: Stopped target swap.target - Swaps.
Nov 4 23:55:10.937132 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 4 23:55:10.937442 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 4 23:55:10.939800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:55:10.941368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:55:10.943356 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 4 23:55:10.943989 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:55:10.945722 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 4 23:55:10.945960 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 4 23:55:10.949135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 4 23:55:10.949378 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 4 23:55:10.950828 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 4 23:55:10.951069 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 4 23:55:10.952746 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 4 23:55:10.952914 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 4 23:55:10.957913 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 4 23:55:10.959903 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 4 23:55:10.961669 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 4 23:55:10.966598 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 4 23:55:10.983563 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 4 23:55:10.983783 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 4 23:55:10.987157 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 4 23:55:10.987348 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 4 23:55:10.992262 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 4 23:55:10.994111 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 4 23:55:11.006732 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 4 23:55:11.007621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 4 23:55:11.014269 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 4 23:55:11.018561 ignition[1118]: INFO : Ignition 2.22.0
Nov 4 23:55:11.021612 ignition[1118]: INFO : Stage: umount
Nov 4 23:55:11.021612 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 4 23:55:11.021612 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 4 23:55:11.033101 ignition[1118]: INFO : umount: umount passed
Nov 4 23:55:11.033101 ignition[1118]: INFO : Ignition finished successfully
Nov 4 23:55:11.027082 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 4 23:55:11.027279 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 4 23:55:11.032055 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 4 23:55:11.032167 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 4 23:55:11.042645 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 4 23:55:11.042719 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 4 23:55:11.044603 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 4 23:55:11.044671 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 4 23:55:11.051522 systemd[1]: Stopped target network.target - Network.
Nov 4 23:55:11.053635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 4 23:55:11.053722 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 4 23:55:11.055019 systemd[1]: Stopped target paths.target - Path Units.
Nov 4 23:55:11.058422 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 4 23:55:11.062545 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:55:11.063773 systemd[1]: Stopped target slices.target - Slice Units.
Nov 4 23:55:11.065734 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 4 23:55:11.067962 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 4 23:55:11.068019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 4 23:55:11.070103 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 4 23:55:11.070157 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 4 23:55:11.072045 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 4 23:55:11.072120 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 4 23:55:11.074043 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 4 23:55:11.074106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 4 23:55:11.076554 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 4 23:55:11.078580 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 4 23:55:11.081638 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 4 23:55:11.081787 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 4 23:55:11.085241 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 4 23:55:11.085435 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 4 23:55:11.091797 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 4 23:55:11.091971 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 4 23:55:11.097781 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 4 23:55:11.097993 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 4 23:55:11.106188 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 4 23:55:11.108981 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 4 23:55:11.109074 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:55:11.112038 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 4 23:55:11.115376 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 4 23:55:11.116549 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 4 23:55:11.120147 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 4 23:55:11.120224 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 4 23:55:11.122140 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 4 23:55:11.122211 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 4 23:55:11.124912 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 4 23:55:11.143935 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 4 23:55:11.144142 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 4 23:55:11.147108 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 4 23:55:11.147197 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:55:11.150967 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 4 23:55:11.151021 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:55:11.154206 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 4 23:55:11.154311 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 4 23:55:11.160189 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 4 23:55:11.160271 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 4 23:55:11.163104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 4 23:55:11.163175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 4 23:55:11.167847 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 4 23:55:11.169847 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 4 23:55:11.169937 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 4 23:55:11.174623 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 4 23:55:11.174707 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 4 23:55:11.176645 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 4 23:55:11.176719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 4 23:55:11.189993 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 4 23:55:11.190129 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 4 23:55:11.205421 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 4 23:55:11.205676 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 4 23:55:11.208048 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 4 23:55:11.211671 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 4 23:55:11.234689 systemd[1]: Switching root.
Nov 4 23:55:11.281882 systemd-journald[319]: Journal stopped
Nov 4 23:55:12.237524 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Nov 4 23:55:12.237570 kernel: SELinux: policy capability network_peer_controls=1
Nov 4 23:55:12.237583 kernel: SELinux: policy capability open_perms=1
Nov 4 23:55:12.237592 kernel: SELinux: policy capability extended_socket_class=1
Nov 4 23:55:12.237603 kernel: SELinux: policy capability always_check_network=0
Nov 4 23:55:12.237611 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 4 23:55:12.237628 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 4 23:55:12.237649 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 4 23:55:12.237665 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 4 23:55:12.237679 kernel: SELinux: policy capability userspace_initial_context=0
Nov 4 23:55:12.237695 kernel: audit: type=1403 audit(1762300511.458:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 4 23:55:12.237724 systemd[1]: Successfully loaded SELinux policy in 92.277ms.
Nov 4 23:55:12.237746 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.580ms.
Nov 4 23:55:12.237763 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 4 23:55:12.237782 systemd[1]: Detected virtualization kvm.
Nov 4 23:55:12.237800 systemd[1]: Detected architecture x86-64.
Nov 4 23:55:12.237943 systemd[1]: Detected first boot.
Nov 4 23:55:12.237970 systemd[1]: Hostname set to .
Nov 4 23:55:12.237988 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 4 23:55:12.238004 zram_generator::config[1161]: No configuration found.
Nov 4 23:55:12.238022 kernel: Guest personality initialized and is inactive
Nov 4 23:55:12.238037 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 4 23:55:12.238052 kernel: Initialized host personality
Nov 4 23:55:12.238066 kernel: NET: Registered PF_VSOCK protocol family
Nov 4 23:55:12.238084 systemd[1]: Populated /etc with preset unit settings.
Nov 4 23:55:12.238094 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 4 23:55:12.238102 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 4 23:55:12.238112 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:55:12.238121 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 4 23:55:12.238136 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 4 23:55:12.238154 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 4 23:55:12.238179 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 4 23:55:12.238194 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 4 23:55:12.238205 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 4 23:55:12.238214 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 4 23:55:12.238223 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 4 23:55:12.238233 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 4 23:55:12.238248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 4 23:55:12.238266 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 4 23:55:12.238298 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 4 23:55:12.239108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 4 23:55:12.239130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 4 23:55:12.239146 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 4 23:55:12.239164 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 4 23:55:12.239183 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 4 23:55:12.239202 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 4 23:55:12.239217 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 4 23:55:12.239234 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 4 23:55:12.239251 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 4 23:55:12.239267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 4 23:55:12.239293 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 4 23:55:12.239302 systemd[1]: Reached target slices.target - Slice Units.
Nov 4 23:55:12.239311 systemd[1]: Reached target swap.target - Swaps.
Nov 4 23:55:12.239320 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 4 23:55:12.239329 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 4 23:55:12.239340 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 4 23:55:12.239349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 4 23:55:12.239358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 4 23:55:12.239367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 4 23:55:12.239376 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 4 23:55:12.239384 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 23:55:12.239399 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 23:55:12.239418 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 23:55:12.239435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:12.242496 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 23:55:12.242515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 23:55:12.242526 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 23:55:12.242538 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 23:55:12.242550 systemd[1]: Reached target machines.target - Containers. Nov 4 23:55:12.242559 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 23:55:12.242568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:55:12.242577 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 23:55:12.242586 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 23:55:12.242596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:55:12.242605 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 23:55:12.242615 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:55:12.242625 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 23:55:12.242634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 4 23:55:12.242644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 23:55:12.242653 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 23:55:12.242662 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 23:55:12.242671 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 23:55:12.242680 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 23:55:12.242690 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:55:12.242699 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 23:55:12.242708 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 23:55:12.242718 kernel: fuse: init (API version 7.41) Nov 4 23:55:12.242728 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 23:55:12.242737 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 23:55:12.242746 kernel: ACPI: bus type drm_connector registered Nov 4 23:55:12.242755 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 23:55:12.242764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 23:55:12.242774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:12.242804 systemd-journald[1252]: Collecting audit messages is disabled. Nov 4 23:55:12.242824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 4 23:55:12.242833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 23:55:12.242844 systemd-journald[1252]: Journal started Nov 4 23:55:12.242862 systemd-journald[1252]: Runtime Journal (/run/log/journal/f65a3935a81b4bf4af434eb8fd6de0ed) is 4.7M, max 38.3M, 33.5M free. Nov 4 23:55:11.993909 systemd[1]: Queued start job for default target multi-user.target. Nov 4 23:55:12.010454 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 4 23:55:12.010903 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 23:55:12.243678 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 23:55:12.246928 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 23:55:12.247591 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 23:55:12.250041 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 23:55:12.250574 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 23:55:12.251230 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 23:55:12.251933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 23:55:12.252639 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 23:55:12.252764 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 23:55:12.253436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:55:12.253587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:55:12.254354 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:55:12.254468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:55:12.255143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:55:12.255339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 4 23:55:12.256292 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 23:55:12.256455 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 23:55:12.257124 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:55:12.257323 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:55:12.258101 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 23:55:12.259021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 23:55:12.260626 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 23:55:12.261402 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 23:55:12.268075 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 23:55:12.269078 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 23:55:12.272542 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 23:55:12.273719 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 23:55:12.275324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 23:55:12.275414 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 23:55:12.276610 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 23:55:12.277232 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:55:12.282783 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 23:55:12.285422 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Nov 4 23:55:12.285946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:55:12.287349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 23:55:12.287911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:55:12.290401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 23:55:12.292719 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 4 23:55:12.296625 systemd-journald[1252]: Time spent on flushing to /var/log/journal/f65a3935a81b4bf4af434eb8fd6de0ed is 61.749ms for 1148 entries. Nov 4 23:55:12.296625 systemd-journald[1252]: System Journal (/var/log/journal/f65a3935a81b4bf4af434eb8fd6de0ed) is 8M, max 588.1M, 580.1M free. Nov 4 23:55:12.373912 systemd-journald[1252]: Received client request to flush runtime journal. Nov 4 23:55:12.373945 kernel: loop1: detected capacity change from 0 to 110984 Nov 4 23:55:12.373962 kernel: loop2: detected capacity change from 0 to 8 Nov 4 23:55:12.298307 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 23:55:12.300015 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 23:55:12.301608 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 23:55:12.327159 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 23:55:12.327787 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 23:55:12.332578 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 23:55:12.361881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 23:55:12.368167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 4 23:55:12.374416 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 23:55:12.376075 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 23:55:12.384502 kernel: loop3: detected capacity change from 0 to 229808 Nov 4 23:55:12.384698 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 23:55:12.387429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 23:55:12.389691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 23:55:12.405546 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 23:55:12.415498 kernel: loop4: detected capacity change from 0 to 128048 Nov 4 23:55:12.421662 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Nov 4 23:55:12.421894 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Nov 4 23:55:12.429683 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 23:55:12.443506 kernel: loop5: detected capacity change from 0 to 110984 Nov 4 23:55:12.448419 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 23:55:12.458499 kernel: loop6: detected capacity change from 0 to 8 Nov 4 23:55:12.461510 kernel: loop7: detected capacity change from 0 to 229808 Nov 4 23:55:12.477497 kernel: loop1: detected capacity change from 0 to 128048 Nov 4 23:55:12.498633 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Nov 4 23:55:12.500302 systemd-resolved[1303]: Positive Trust Anchors: Nov 4 23:55:12.500312 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 23:55:12.500315 systemd-resolved[1303]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 23:55:12.500341 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 23:55:12.503264 (sd-merge)[1310]: Merged extensions into '/usr'. Nov 4 23:55:12.510945 systemd[1]: Reload requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 23:55:12.510963 systemd[1]: Reloading... Nov 4 23:55:12.518784 systemd-resolved[1303]: Using system hostname 'ci-4487-0-0-n-1c2c5ddea4'. Nov 4 23:55:12.568549 zram_generator::config[1343]: No configuration found. Nov 4 23:55:12.723978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 23:55:12.724294 systemd[1]: Reloading finished in 213 ms. Nov 4 23:55:12.740694 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 23:55:12.741436 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 23:55:12.742197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 23:55:12.745031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 23:55:12.756341 systemd[1]: Starting ensure-sysext.service... Nov 4 23:55:12.759562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 23:55:12.761573 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 4 23:55:12.770591 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 23:55:12.770931 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 23:55:12.771202 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 23:55:12.771584 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 23:55:12.772289 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 23:55:12.772626 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Nov 4 23:55:12.772740 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Nov 4 23:55:12.773727 systemd[1]: Reload requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... Nov 4 23:55:12.773740 systemd[1]: Reloading... Nov 4 23:55:12.777356 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:55:12.777365 systemd-tmpfiles[1388]: Skipping /boot Nov 4 23:55:12.784731 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 23:55:12.784787 systemd-tmpfiles[1388]: Skipping /boot Nov 4 23:55:12.805406 systemd-udevd[1389]: Using default interface naming scheme 'v257'. Nov 4 23:55:12.827496 zram_generator::config[1424]: No configuration found. 
Nov 4 23:55:12.939501 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 4 23:55:12.959513 kernel: mousedev: PS/2 mouse device common for all mice Nov 4 23:55:12.985484 kernel: ACPI: button: Power Button [PWRF] Nov 4 23:55:13.022131 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 4 23:55:13.022370 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 4 23:55:13.037350 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 4 23:55:13.038580 systemd[1]: Reloading finished in 264 ms. Nov 4 23:55:13.046177 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 23:55:13.062550 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 23:55:13.066499 kernel: EDAC MC: Ver: 3.0.0 Nov 4 23:55:13.095668 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 4 23:55:13.118180 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 4 23:55:13.136536 systemd[1]: Finished ensure-sysext.service. Nov 4 23:55:13.142657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 4 23:55:13.146523 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 23:55:13.148559 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 4 23:55:13.149636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 23:55:13.151300 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 23:55:13.157025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 23:55:13.159738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 4 23:55:13.164640 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 23:55:13.172849 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 23:55:13.177813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 23:55:13.179089 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 23:55:13.180161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 23:55:13.181048 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 23:55:13.195602 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 23:55:13.203429 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Nov 4 23:55:13.202516 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 23:55:13.204546 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Nov 4 23:55:13.207750 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 23:55:13.211176 kernel: Console: switching to colour dummy device 80x25 Nov 4 23:55:13.211203 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 4 23:55:13.211224 kernel: [drm] features: -context_init Nov 4 23:55:13.215676 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:13.215744 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 4 23:55:13.218930 kernel: [drm] number of scanouts: 1 Nov 4 23:55:13.218959 kernel: [drm] number of cap sets: 0 Nov 4 23:55:13.220510 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Nov 4 23:55:13.223361 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 23:55:13.224097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 23:55:13.224679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 23:55:13.225192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 23:55:13.232200 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 4 23:55:13.232236 kernel: Console: switching to colour frame buffer device 160x50 Nov 4 23:55:13.233391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 23:55:13.234666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 23:55:13.236491 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 4 23:55:13.242331 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 23:55:13.242854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 23:55:13.255933 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 23:55:13.256033 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 23:55:13.260431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:55:13.260883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:13.262965 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:13.265429 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Nov 4 23:55:13.279278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 23:55:13.279422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:13.281209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 23:55:13.283142 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 23:55:13.298200 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 23:55:13.311893 augenrules[1561]: No rules Nov 4 23:55:13.313183 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 23:55:13.313345 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 23:55:13.333456 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 23:55:13.333707 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 23:55:13.340433 systemd-networkd[1526]: lo: Link UP Nov 4 23:55:13.340439 systemd-networkd[1526]: lo: Gained carrier Nov 4 23:55:13.342294 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 23:55:13.342395 systemd[1]: Reached target network.target - Network. Nov 4 23:55:13.343558 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 23:55:13.344600 systemd-networkd[1526]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:13.344729 systemd-networkd[1526]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:55:13.346689 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Nov 4 23:55:13.347295 systemd-networkd[1526]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:13.347301 systemd-networkd[1526]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 23:55:13.348954 systemd-networkd[1526]: eth0: Link UP Nov 4 23:55:13.349806 systemd-networkd[1526]: eth0: Gained carrier Nov 4 23:55:13.349868 systemd-networkd[1526]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:13.355325 systemd-networkd[1526]: eth1: Link UP Nov 4 23:55:13.356155 systemd-networkd[1526]: eth1: Gained carrier Nov 4 23:55:13.356172 systemd-networkd[1526]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 23:55:13.367023 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 23:55:13.371568 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 23:55:13.371906 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 23:55:13.385518 systemd-networkd[1526]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 4 23:55:13.386072 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection. Nov 4 23:55:13.399819 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 23:55:13.417538 systemd-networkd[1526]: eth0: DHCPv4 address 46.62.221.150/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 4 23:55:13.418291 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection. 
Nov 4 23:55:13.855238 ldconfig[1513]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 23:55:13.860603 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 23:55:13.863681 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 23:55:13.886175 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 23:55:13.886871 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 23:55:13.887738 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 23:55:13.888445 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 23:55:13.890325 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 4 23:55:13.891302 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 23:55:13.892825 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 23:55:13.893585 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 23:55:13.894217 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 23:55:13.894282 systemd[1]: Reached target paths.target - Path Units. Nov 4 23:55:13.895367 systemd[1]: Reached target timers.target - Timer Units. Nov 4 23:55:13.899317 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 23:55:13.903726 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 23:55:13.909106 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 23:55:13.910390 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Nov 4 23:55:13.912454 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 23:55:13.924302 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 23:55:13.926695 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 23:55:13.929519 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 23:55:13.933198 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 23:55:13.935680 systemd[1]: Reached target basic.target - Basic System. Nov 4 23:55:13.936585 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:55:13.936646 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 23:55:13.938273 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 23:55:13.942232 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 4 23:55:13.950569 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 23:55:13.955778 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 23:55:13.965596 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 23:55:13.972840 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 23:55:13.974814 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 23:55:13.978726 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 4 23:55:13.991684 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 4 23:55:13.995566 jq[1589]: false Nov 4 23:55:13.996350 coreos-metadata[1584]: Nov 04 23:55:13.996 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 4 23:55:13.996607 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 23:55:14.008367 coreos-metadata[1584]: Nov 04 23:55:14.005 INFO Fetch successful Nov 4 23:55:14.005369 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 4 23:55:14.011127 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 23:55:14.013074 coreos-metadata[1584]: Nov 04 23:55:14.010 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 4 23:55:14.013074 coreos-metadata[1584]: Nov 04 23:55:14.011 INFO Fetch successful Nov 4 23:55:14.017633 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 23:55:14.026547 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing passwd entry cache Nov 4 23:55:14.026744 oslogin_cache_refresh[1591]: Refreshing passwd entry cache Nov 4 23:55:14.027319 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 23:55:14.030569 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 23:55:14.033168 oslogin_cache_refresh[1591]: Failure getting users, quitting Nov 4 23:55:14.033070 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 23:55:14.033928 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting users, quitting Nov 4 23:55:14.033928 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Nov 4 23:55:14.033928 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Refreshing group entry cache Nov 4 23:55:14.033183 oslogin_cache_refresh[1591]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 4 23:55:14.033230 oslogin_cache_refresh[1591]: Refreshing group entry cache Nov 4 23:55:14.034993 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Failure getting groups, quitting Nov 4 23:55:14.034993 google_oslogin_nss_cache[1591]: oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:55:14.034982 oslogin_cache_refresh[1591]: Failure getting groups, quitting Nov 4 23:55:14.034989 oslogin_cache_refresh[1591]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 4 23:55:14.035278 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 23:55:14.039668 extend-filesystems[1590]: Found /dev/sda6 Nov 4 23:55:14.040500 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 23:55:14.044000 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 23:55:14.047725 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 23:55:14.050877 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 23:55:14.054122 extend-filesystems[1590]: Found /dev/sda9 Nov 4 23:55:14.051190 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 4 23:55:14.057652 extend-filesystems[1590]: Checking size of /dev/sda9 Nov 4 23:55:14.051355 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 4 23:55:14.055806 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 23:55:14.059946 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Nov 4 23:55:14.060972 jq[1612]: true Nov 4 23:55:14.065760 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 23:55:14.065926 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 4 23:55:14.093667 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 4 23:55:14.112799 jq[1626]: true Nov 4 23:55:14.114176 extend-filesystems[1590]: Resized partition /dev/sda9 Nov 4 23:55:14.133110 update_engine[1608]: I20251104 23:55:14.131740 1608 main.cc:92] Flatcar Update Engine starting Nov 4 23:55:14.133277 tar[1625]: linux-amd64/LICENSE Nov 4 23:55:14.133277 tar[1625]: linux-amd64/helm Nov 4 23:55:14.136070 extend-filesystems[1648]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 23:55:14.171533 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Nov 4 23:55:14.179619 dbus-daemon[1585]: [system] SELinux support is enabled Nov 4 23:55:14.179746 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 23:55:14.193401 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 23:55:14.193433 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 23:55:14.200269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 23:55:14.200286 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 23:55:14.219373 systemd-logind[1604]: New seat seat0. 
Nov 4 23:55:14.237601 update_engine[1608]: I20251104 23:55:14.231280  1608 update_check_scheduler.cc:74] Next update check in 2m40s
Nov 4 23:55:14.224066 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 4 23:55:14.230694 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 4 23:55:14.231493 systemd[1]: Started update-engine.service - Update Engine.
Nov 4 23:55:14.234401 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 4 23:55:14.239918 systemd-logind[1604]: Watching system buttons on /dev/input/event3 (Power Button)
Nov 4 23:55:14.239937 systemd-logind[1604]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 4 23:55:14.240096 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 4 23:55:14.385090 bash[1665]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:55:14.386106 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 4 23:55:14.391601 locksmithd[1668]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 4 23:55:14.394023 systemd[1]: Starting sshkeys.service...
Nov 4 23:55:14.430155 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 4 23:55:14.438520 kernel: EXT4-fs (sda9): resized filesystem to 8410107
Nov 4 23:55:14.438686 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 4 23:55:14.460752 extend-filesystems[1648]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 4 23:55:14.460752 extend-filesystems[1648]: old_desc_blocks = 1, new_desc_blocks = 5
Nov 4 23:55:14.460752 extend-filesystems[1648]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long.
Nov 4 23:55:14.476920 extend-filesystems[1590]: Resized filesystem in /dev/sda9
Nov 4 23:55:14.480315 coreos-metadata[1682]: Nov 04 23:55:14.468 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Nov 4 23:55:14.480315 coreos-metadata[1682]: Nov 04 23:55:14.468 INFO Fetch successful
Nov 4 23:55:14.467762 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 4 23:55:14.467936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 4 23:55:14.478160 unknown[1682]: wrote ssh authorized keys file for user: core
Nov 4 23:55:14.488269 containerd[1627]: time="2025-11-04T23:55:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 4 23:55:14.488269 containerd[1627]: time="2025-11-04T23:55:14.488161571Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Nov 4 23:55:14.498880 containerd[1627]: time="2025-11-04T23:55:14.498853178Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.955µs"
Nov 4 23:55:14.499067 containerd[1627]: time="2025-11-04T23:55:14.499038937Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 4 23:55:14.499183 containerd[1627]: time="2025-11-04T23:55:14.499170303Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 4 23:55:14.499410 containerd[1627]: time="2025-11-04T23:55:14.499394693Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 4 23:55:14.499672 containerd[1627]: time="2025-11-04T23:55:14.499659500Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 4 23:55:14.499766 containerd[1627]: time="2025-11-04T23:55:14.499749098Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:55:14.499887 containerd[1627]: time="2025-11-04T23:55:14.499867380Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 4 23:55:14.499949 containerd[1627]: time="2025-11-04T23:55:14.499934366Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:55:14.500188 containerd[1627]: time="2025-11-04T23:55:14.500168825Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 4 23:55:14.500273 containerd[1627]: time="2025-11-04T23:55:14.500241872Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:55:14.500356 containerd[1627]: time="2025-11-04T23:55:14.500337141Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 4 23:55:14.500404 containerd[1627]: time="2025-11-04T23:55:14.500393737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 4 23:55:14.500567 containerd[1627]: time="2025-11-04T23:55:14.500546413Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 4 23:55:14.501507 containerd[1627]: time="2025-11-04T23:55:14.501111543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:55:14.501507 containerd[1627]: time="2025-11-04T23:55:14.501140317Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 4 23:55:14.501507 containerd[1627]: time="2025-11-04T23:55:14.501149404Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 4 23:55:14.501507 containerd[1627]: time="2025-11-04T23:55:14.501171545Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 4 23:55:14.501507 containerd[1627]: time="2025-11-04T23:55:14.501344690Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 4 23:55:14.501507 containerd[1627]: time="2025-11-04T23:55:14.501401416Z" level=info msg="metadata content store policy set" policy=shared
Nov 4 23:55:14.505596 containerd[1627]: time="2025-11-04T23:55:14.505495253Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 4 23:55:14.505596 containerd[1627]: time="2025-11-04T23:55:14.505539987Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 4 23:55:14.505596 containerd[1627]: time="2025-11-04T23:55:14.505557960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 4 23:55:14.505596 containerd[1627]: time="2025-11-04T23:55:14.505567949Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 4 23:55:14.505596 containerd[1627]: time="2025-11-04T23:55:14.505576826Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505717389Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505737747Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505746945Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505754689Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505768465Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505775318Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505784194Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505856380Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505872239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505883941Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505898660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505906654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505918606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505927403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 4 23:55:14.506075 containerd[1627]: time="2025-11-04T23:55:14.505935508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 4 23:55:14.506300 containerd[1627]: time="2025-11-04T23:55:14.505943143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 4 23:55:14.506300 containerd[1627]: time="2025-11-04T23:55:14.505950807Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 4 23:55:14.506300 containerd[1627]: time="2025-11-04T23:55:14.505958301Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 4 23:55:14.506300 containerd[1627]: time="2025-11-04T23:55:14.506000941Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 4 23:55:14.506300 containerd[1627]: time="2025-11-04T23:55:14.506010569Z" level=info msg="Start snapshots syncer"
Nov 4 23:55:14.506559 containerd[1627]: time="2025-11-04T23:55:14.506542286Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 4 23:55:14.507054 containerd[1627]: time="2025-11-04T23:55:14.507026705Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 4 23:55:14.507212 containerd[1627]: time="2025-11-04T23:55:14.507197475Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 4 23:55:14.507623 containerd[1627]: time="2025-11-04T23:55:14.507518246Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 4 23:55:14.507967 containerd[1627]: time="2025-11-04T23:55:14.507907426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 4 23:55:14.507967 containerd[1627]: time="2025-11-04T23:55:14.507932382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 4 23:55:14.508089 containerd[1627]: time="2025-11-04T23:55:14.508017873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 4 23:55:14.508089 containerd[1627]: time="2025-11-04T23:55:14.508038712Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 4 23:55:14.508089 containerd[1627]: time="2025-11-04T23:55:14.508049222Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 4 23:55:14.508089 containerd[1627]: time="2025-11-04T23:55:14.508058079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 4 23:55:14.508575 containerd[1627]: time="2025-11-04T23:55:14.508066865Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 4 23:55:14.508575 containerd[1627]: time="2025-11-04T23:55:14.508191038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 4 23:55:14.508575 containerd[1627]: time="2025-11-04T23:55:14.508412994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 4 23:55:14.508575 containerd[1627]: time="2025-11-04T23:55:14.508497322Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 4 23:55:14.508575 containerd[1627]: time="2025-11-04T23:55:14.508529572Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 23:55:14.508575 containerd[1627]: time="2025-11-04T23:55:14.508542217Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 4 23:55:14.508756 containerd[1627]: time="2025-11-04T23:55:14.508548608Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 23:55:14.508756 containerd[1627]: time="2025-11-04T23:55:14.508735358Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 4 23:55:14.508940 containerd[1627]: time="2025-11-04T23:55:14.508887394Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.508980809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.508996248Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.509010374Z" level=info msg="runtime interface created"
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.509014231Z" level=info msg="created NRI interface"
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.509020754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.509029610Z" level=info msg="Connect containerd service"
Nov 4 23:55:14.509110 containerd[1627]: time="2025-11-04T23:55:14.509052333Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 4 23:55:14.510752 containerd[1627]: time="2025-11-04T23:55:14.510574336Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 4 23:55:14.511009 update-ssh-keys[1688]: Updated "/home/core/.ssh/authorized_keys"
Nov 4 23:55:14.511869 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 4 23:55:14.533202 systemd[1]: Finished sshkeys.service.
Nov 4 23:55:14.553647 systemd-networkd[1526]: eth0: Gained IPv6LL
Nov 4 23:55:14.554073 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Nov 4 23:55:14.557611 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 4 23:55:14.559611 systemd[1]: Reached target network-online.target - Network is Online.
Nov 4 23:55:14.567783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:14.575810 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 4 23:55:14.653082 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 4 23:55:14.726936 containerd[1627]: time="2025-11-04T23:55:14.726827423Z" level=info msg="Start subscribing containerd event"
Nov 4 23:55:14.727180 containerd[1627]: time="2025-11-04T23:55:14.727091148Z" level=info msg="Start recovering state"
Nov 4 23:55:14.727884 containerd[1627]: time="2025-11-04T23:55:14.727870549Z" level=info msg="Start event monitor"
Nov 4 23:55:14.728259 containerd[1627]: time="2025-11-04T23:55:14.728217239Z" level=info msg="Start cni network conf syncer for default"
Nov 4 23:55:14.728365 containerd[1627]: time="2025-11-04T23:55:14.728324440Z" level=info msg="Start streaming server"
Nov 4 23:55:14.728569 containerd[1627]: time="2025-11-04T23:55:14.728432633Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 4 23:55:14.728569 containerd[1627]: time="2025-11-04T23:55:14.728443263Z" level=info msg="runtime interface starting up..."
Nov 4 23:55:14.728569 containerd[1627]: time="2025-11-04T23:55:14.728449014Z" level=info msg="starting plugins..."
Nov 4 23:55:14.728569 containerd[1627]: time="2025-11-04T23:55:14.728461297Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Nov 4 23:55:14.729714 containerd[1627]: time="2025-11-04T23:55:14.728790805Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 4 23:55:14.729714 containerd[1627]: time="2025-11-04T23:55:14.728830079Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 4 23:55:14.731419 systemd[1]: Started containerd.service - containerd container runtime.
Nov 4 23:55:14.733172 containerd[1627]: time="2025-11-04T23:55:14.732976323Z" level=info msg="containerd successfully booted in 0.247739s"
Nov 4 23:55:14.745880 systemd-networkd[1526]: eth1: Gained IPv6LL
Nov 4 23:55:14.746456 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Nov 4 23:55:14.807791 tar[1625]: linux-amd64/README.md
Nov 4 23:55:14.821862 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 4 23:55:14.952588 sshd_keygen[1618]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 4 23:55:14.969677 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 4 23:55:14.972710 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 4 23:55:14.986901 systemd[1]: issuegen.service: Deactivated successfully.
Nov 4 23:55:14.987499 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 4 23:55:14.990658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 4 23:55:15.002211 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 4 23:55:15.008677 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 4 23:55:15.010153 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 4 23:55:15.011987 systemd[1]: Reached target getty.target - Login Prompts.
Nov 4 23:55:15.638373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:15.640055 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 4 23:55:15.644672 systemd[1]: Startup finished in 2.650s (kernel) + 5.679s (initrd) + 4.276s (userspace) = 12.606s.
Nov 4 23:55:15.650845 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:55:16.557840 kubelet[1741]: E1104 23:55:16.557734    1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:55:16.561155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:55:16.561435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:55:16.561966 systemd[1]: kubelet.service: Consumed 1.207s CPU time, 267.1M memory peak.
Nov 4 23:55:24.476574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 4 23:55:24.479401 systemd[1]: Started sshd@0-46.62.221.150:22-147.75.109.163:41716.service - OpenSSH per-connection server daemon (147.75.109.163:41716).
Nov 4 23:55:25.643879 sshd[1753]: Accepted publickey for core from 147.75.109.163 port 41716 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:25.647054 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:25.657937 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 4 23:55:25.660305 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 4 23:55:25.676159 systemd-logind[1604]: New session 1 of user core.
Nov 4 23:55:25.687058 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 4 23:55:25.690945 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 4 23:55:25.708637 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 4 23:55:25.711938 systemd-logind[1604]: New session c1 of user core.
Nov 4 23:55:25.875423 systemd[1758]: Queued start job for default target default.target.
Nov 4 23:55:25.882195 systemd[1758]: Created slice app.slice - User Application Slice.
Nov 4 23:55:25.882220 systemd[1758]: Reached target paths.target - Paths.
Nov 4 23:55:25.882255 systemd[1758]: Reached target timers.target - Timers.
Nov 4 23:55:25.883706 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 4 23:55:25.894603 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 4 23:55:25.894810 systemd[1758]: Reached target sockets.target - Sockets.
Nov 4 23:55:25.894901 systemd[1758]: Reached target basic.target - Basic System.
Nov 4 23:55:25.894960 systemd[1758]: Reached target default.target - Main User Target.
Nov 4 23:55:25.895001 systemd[1758]: Startup finished in 173ms.
Nov 4 23:55:25.895282 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 4 23:55:25.905771 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 4 23:55:26.649020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 4 23:55:26.652088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:26.654823 systemd[1]: Started sshd@1-46.62.221.150:22-147.75.109.163:41718.service - OpenSSH per-connection server daemon (147.75.109.163:41718).
Nov 4 23:55:26.804362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:26.808733 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:55:26.861178 kubelet[1780]: E1104 23:55:26.861082    1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:55:26.866807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:55:26.867014 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:55:26.867535 systemd[1]: kubelet.service: Consumed 186ms CPU time, 110.7M memory peak.
Nov 4 23:55:27.672179 sshd[1770]: Accepted publickey for core from 147.75.109.163 port 41718 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:27.673443 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:27.678373 systemd-logind[1604]: New session 2 of user core.
Nov 4 23:55:27.684634 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 4 23:55:28.363510 sshd[1787]: Connection closed by 147.75.109.163 port 41718
Nov 4 23:55:28.364382 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:28.370949 systemd[1]: sshd@1-46.62.221.150:22-147.75.109.163:41718.service: Deactivated successfully.
Nov 4 23:55:28.371029 systemd-logind[1604]: Session 2 logged out. Waiting for processes to exit.
Nov 4 23:55:28.373311 systemd[1]: session-2.scope: Deactivated successfully.
Nov 4 23:55:28.375916 systemd-logind[1604]: Removed session 2.
Nov 4 23:55:28.535514 systemd[1]: Started sshd@2-46.62.221.150:22-147.75.109.163:41726.service - OpenSSH per-connection server daemon (147.75.109.163:41726).
Nov 4 23:55:29.558380 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 41726 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:29.559561 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:29.564627 systemd-logind[1604]: New session 3 of user core.
Nov 4 23:55:29.571590 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 4 23:55:30.255524 sshd[1796]: Connection closed by 147.75.109.163 port 41726
Nov 4 23:55:30.256295 sshd-session[1793]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:30.262028 systemd-logind[1604]: Session 3 logged out. Waiting for processes to exit.
Nov 4 23:55:30.262176 systemd[1]: sshd@2-46.62.221.150:22-147.75.109.163:41726.service: Deactivated successfully.
Nov 4 23:55:30.264428 systemd[1]: session-3.scope: Deactivated successfully.
Nov 4 23:55:30.267010 systemd-logind[1604]: Removed session 3.
Nov 4 23:55:30.429262 systemd[1]: Started sshd@3-46.62.221.150:22-147.75.109.163:44850.service - OpenSSH per-connection server daemon (147.75.109.163:44850).
Nov 4 23:55:31.445358 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 44850 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:31.446657 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:31.451524 systemd-logind[1604]: New session 4 of user core.
Nov 4 23:55:31.464600 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 4 23:55:32.134208 sshd[1805]: Connection closed by 147.75.109.163 port 44850
Nov 4 23:55:32.134953 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:32.139540 systemd[1]: sshd@3-46.62.221.150:22-147.75.109.163:44850.service: Deactivated successfully.
Nov 4 23:55:32.141739 systemd[1]: session-4.scope: Deactivated successfully.
Nov 4 23:55:32.144115 systemd-logind[1604]: Session 4 logged out. Waiting for processes to exit.
Nov 4 23:55:32.145720 systemd-logind[1604]: Removed session 4.
Nov 4 23:55:32.350218 systemd[1]: Started sshd@4-46.62.221.150:22-147.75.109.163:44854.service - OpenSSH per-connection server daemon (147.75.109.163:44854).
Nov 4 23:55:33.499861 sshd[1811]: Accepted publickey for core from 147.75.109.163 port 44854 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:33.501086 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:33.505809 systemd-logind[1604]: New session 5 of user core.
Nov 4 23:55:33.513604 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 4 23:55:34.101412 sudo[1815]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 4 23:55:34.101671 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:55:34.116512 sudo[1815]: pam_unix(sudo:session): session closed for user root
Nov 4 23:55:34.299258 sshd[1814]: Connection closed by 147.75.109.163 port 44854
Nov 4 23:55:34.299964 sshd-session[1811]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:34.303910 systemd-logind[1604]: Session 5 logged out. Waiting for processes to exit.
Nov 4 23:55:34.304520 systemd[1]: sshd@4-46.62.221.150:22-147.75.109.163:44854.service: Deactivated successfully.
Nov 4 23:55:34.305910 systemd[1]: session-5.scope: Deactivated successfully.
Nov 4 23:55:34.307462 systemd-logind[1604]: Removed session 5.
Nov 4 23:55:34.458707 systemd[1]: Started sshd@5-46.62.221.150:22-147.75.109.163:44856.service - OpenSSH per-connection server daemon (147.75.109.163:44856).
Nov 4 23:55:35.495958 sshd[1821]: Accepted publickey for core from 147.75.109.163 port 44856 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:35.497380 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:35.502431 systemd-logind[1604]: New session 6 of user core.
Nov 4 23:55:35.509631 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 4 23:55:36.025380 sudo[1826]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 4 23:55:36.025685 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:55:36.029883 sudo[1826]: pam_unix(sudo:session): session closed for user root
Nov 4 23:55:36.035355 sudo[1825]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 4 23:55:36.035600 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:55:36.044179 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 4 23:55:36.091808 augenrules[1848]: No rules
Nov 4 23:55:36.092710 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 4 23:55:36.093152 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 4 23:55:36.095378 sudo[1825]: pam_unix(sudo:session): session closed for user root
Nov 4 23:55:36.257510 sshd[1824]: Connection closed by 147.75.109.163 port 44856
Nov 4 23:55:36.258161 sshd-session[1821]: pam_unix(sshd:session): session closed for user core
Nov 4 23:55:36.262888 systemd-logind[1604]: Session 6 logged out. Waiting for processes to exit.
Nov 4 23:55:36.263956 systemd[1]: sshd@5-46.62.221.150:22-147.75.109.163:44856.service: Deactivated successfully.
Nov 4 23:55:36.266393 systemd[1]: session-6.scope: Deactivated successfully.
Nov 4 23:55:36.268463 systemd-logind[1604]: Removed session 6.
Nov 4 23:55:36.437260 systemd[1]: Started sshd@6-46.62.221.150:22-147.75.109.163:44872.service - OpenSSH per-connection server daemon (147.75.109.163:44872).
Nov 4 23:55:37.117985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 4 23:55:37.120826 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:37.286416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:37.297935 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:55:37.361380 kubelet[1868]: E1104 23:55:37.361303 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:55:37.365222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:55:37.365533 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:55:37.366143 systemd[1]: kubelet.service: Consumed 195ms CPU time, 108M memory peak.
Nov 4 23:55:37.468190 sshd[1857]: Accepted publickey for core from 147.75.109.163 port 44872 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE
Nov 4 23:55:37.470154 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 4 23:55:37.477572 systemd-logind[1604]: New session 7 of user core.
Nov 4 23:55:37.502878 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 4 23:55:37.998703 sudo[1876]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 4 23:55:37.998942 sudo[1876]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 4 23:55:38.340714 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 4 23:55:38.361853 (dockerd)[1894]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 4 23:55:38.600958 dockerd[1894]: time="2025-11-04T23:55:38.600634212Z" level=info msg="Starting up"
Nov 4 23:55:38.601723 dockerd[1894]: time="2025-11-04T23:55:38.601700451Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 4 23:55:38.610756 dockerd[1894]: time="2025-11-04T23:55:38.610697452Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 4 23:55:38.646907 dockerd[1894]: time="2025-11-04T23:55:38.646872914Z" level=info msg="Loading containers: start."
Nov 4 23:55:38.657507 kernel: Initializing XFRM netlink socket
Nov 4 23:55:38.810853 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Nov 4 23:55:38.840119 systemd-networkd[1526]: docker0: Link UP
Nov 4 23:55:38.845295 dockerd[1894]: time="2025-11-04T23:55:38.845244311Z" level=info msg="Loading containers: done."
Nov 4 23:55:38.857113 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3059701885-merged.mount: Deactivated successfully.
Nov 4 23:55:38.859819 dockerd[1894]: time="2025-11-04T23:55:38.859775919Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 4 23:55:38.859888 dockerd[1894]: time="2025-11-04T23:55:38.859855518Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 4 23:55:38.859950 dockerd[1894]: time="2025-11-04T23:55:38.859924437Z" level=info msg="Initializing buildkit"
Nov 4 23:55:38.881359 dockerd[1894]: time="2025-11-04T23:55:38.881122867Z" level=info msg="Completed buildkit initialization"
Nov 4 23:55:38.889419 dockerd[1894]: time="2025-11-04T23:55:38.889379198Z" level=info msg="Daemon has completed initialization"
Nov 4 23:55:38.889638 dockerd[1894]: time="2025-11-04T23:55:38.889596616Z" level=info msg="API listen on /run/docker.sock"
Nov 4 23:55:38.889677 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 4 23:55:39.080560 systemd-timesyncd[1527]: Contacted time server 94.16.122.152:123 (2.flatcar.pool.ntp.org).
Nov 4 23:55:39.080712 systemd-timesyncd[1527]: Initial clock synchronization to Tue 2025-11-04 23:55:39.004846 UTC.
Nov 4 23:55:40.127216 containerd[1627]: time="2025-11-04T23:55:40.127174542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 4 23:55:40.697757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052887234.mount: Deactivated successfully.
Nov 4 23:55:41.750344 containerd[1627]: time="2025-11-04T23:55:41.750282172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:41.751394 containerd[1627]: time="2025-11-04T23:55:41.751239565Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114993"
Nov 4 23:55:41.752300 containerd[1627]: time="2025-11-04T23:55:41.752272184Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:41.754712 containerd[1627]: time="2025-11-04T23:55:41.754675545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:41.755684 containerd[1627]: time="2025-11-04T23:55:41.755257234Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.628046837s"
Nov 4 23:55:41.755684 containerd[1627]: time="2025-11-04T23:55:41.755286624Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 4 23:55:41.755948 containerd[1627]: time="2025-11-04T23:55:41.755932770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 4 23:55:43.031510 containerd[1627]: time="2025-11-04T23:55:43.031258745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:43.032491 containerd[1627]: time="2025-11-04T23:55:43.032265941Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020866"
Nov 4 23:55:43.033354 containerd[1627]: time="2025-11-04T23:55:43.033328190Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:43.035389 containerd[1627]: time="2025-11-04T23:55:43.035359405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:43.036300 containerd[1627]: time="2025-11-04T23:55:43.036268858Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.280266981s"
Nov 4 23:55:43.036342 containerd[1627]: time="2025-11-04T23:55:43.036300278Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 4 23:55:43.036873 containerd[1627]: time="2025-11-04T23:55:43.036759763Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 4 23:55:44.274377 containerd[1627]: time="2025-11-04T23:55:44.274233762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:44.276085 containerd[1627]: time="2025-11-04T23:55:44.276019770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155590"
Nov 4 23:55:44.277873 containerd[1627]: time="2025-11-04T23:55:44.277274140Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:44.281707 containerd[1627]: time="2025-11-04T23:55:44.281655621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:44.283377 containerd[1627]: time="2025-11-04T23:55:44.283334323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.24654312s"
Nov 4 23:55:44.283585 containerd[1627]: time="2025-11-04T23:55:44.283553238Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 4 23:55:44.284738 containerd[1627]: time="2025-11-04T23:55:44.284670612Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 4 23:55:45.237016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3345544457.mount: Deactivated successfully.
Nov 4 23:55:45.557196 containerd[1627]: time="2025-11-04T23:55:45.557143729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:45.558417 containerd[1627]: time="2025-11-04T23:55:45.558202516Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929497"
Nov 4 23:55:45.559316 containerd[1627]: time="2025-11-04T23:55:45.559289909Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:45.561160 containerd[1627]: time="2025-11-04T23:55:45.561131057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:45.561674 containerd[1627]: time="2025-11-04T23:55:45.561646881Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.276920628s"
Nov 4 23:55:45.561743 containerd[1627]: time="2025-11-04T23:55:45.561730084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 4 23:55:45.562252 containerd[1627]: time="2025-11-04T23:55:45.562225569Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 4 23:55:46.059658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424717344.mount: Deactivated successfully.
Nov 4 23:55:46.929517 containerd[1627]: time="2025-11-04T23:55:46.929434478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:46.930776 containerd[1627]: time="2025-11-04T23:55:46.930423775Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Nov 4 23:55:46.931649 containerd[1627]: time="2025-11-04T23:55:46.931623094Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:46.934110 containerd[1627]: time="2025-11-04T23:55:46.934085862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:46.934988 containerd[1627]: time="2025-11-04T23:55:46.934968750Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.372714095s"
Nov 4 23:55:46.935070 containerd[1627]: time="2025-11-04T23:55:46.935053361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 4 23:55:46.935835 containerd[1627]: time="2025-11-04T23:55:46.935805878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 4 23:55:47.372988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 4 23:55:47.376002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:47.388123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631426917.mount: Deactivated successfully.
Nov 4 23:55:47.399690 containerd[1627]: time="2025-11-04T23:55:47.399608092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:55:47.401215 containerd[1627]: time="2025-11-04T23:55:47.401179267Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Nov 4 23:55:47.404656 containerd[1627]: time="2025-11-04T23:55:47.404577159Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:55:47.409577 containerd[1627]: time="2025-11-04T23:55:47.409444373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 4 23:55:47.411535 containerd[1627]: time="2025-11-04T23:55:47.411429888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 475.583876ms"
Nov 4 23:55:47.411642 containerd[1627]: time="2025-11-04T23:55:47.411557234Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 4 23:55:47.413077 containerd[1627]: time="2025-11-04T23:55:47.413014070Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 4 23:55:47.519751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:47.529861 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 4 23:55:47.569872 kubelet[2242]: E1104 23:55:47.569817 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 4 23:55:47.572536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 4 23:55:47.572655 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 4 23:55:47.572904 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak.
Nov 4 23:55:47.844681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485584655.mount: Deactivated successfully.
Nov 4 23:55:50.469751 containerd[1627]: time="2025-11-04T23:55:50.469702031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:50.471008 containerd[1627]: time="2025-11-04T23:55:50.470962445Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378491"
Nov 4 23:55:50.487865 containerd[1627]: time="2025-11-04T23:55:50.487803248Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:50.490524 containerd[1627]: time="2025-11-04T23:55:50.490489704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:55:50.491506 containerd[1627]: time="2025-11-04T23:55:50.491339848Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.078283357s"
Nov 4 23:55:50.491506 containerd[1627]: time="2025-11-04T23:55:50.491366571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 4 23:55:54.388233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:54.389251 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.6M memory peak.
Nov 4 23:55:54.392781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:54.436816 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-7.scope)...
Nov 4 23:55:54.436838 systemd[1]: Reloading...
Nov 4 23:55:54.536496 zram_generator::config[2380]: No configuration found.
Nov 4 23:55:54.717926 systemd[1]: Reloading finished in 280 ms.
Nov 4 23:55:54.754029 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 4 23:55:54.754102 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 4 23:55:54.754324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:54.754357 systemd[1]: kubelet.service: Consumed 96ms CPU time, 97.9M memory peak.
Nov 4 23:55:54.755890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 4 23:55:54.915651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 4 23:55:54.921957 (kubelet)[2431]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 4 23:55:54.984147 kubelet[2431]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:55:54.985578 kubelet[2431]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 4 23:55:54.985578 kubelet[2431]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 4 23:55:54.985578 kubelet[2431]: I1104 23:55:54.984671 2431 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 4 23:55:55.261792 kubelet[2431]: I1104 23:55:55.260570 2431 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 4 23:55:55.261792 kubelet[2431]: I1104 23:55:55.260614 2431 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 4 23:55:55.261792 kubelet[2431]: I1104 23:55:55.261191 2431 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 4 23:55:55.338038 kubelet[2431]: I1104 23:55:55.337972 2431 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 4 23:55:55.340810 kubelet[2431]: E1104 23:55:55.340755 2431 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.221.150:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 4 23:55:55.367499 kubelet[2431]: I1104 23:55:55.367453 2431 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 4 23:55:55.375054 kubelet[2431]: I1104 23:55:55.374996 2431 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 4 23:55:55.378495 kubelet[2431]: I1104 23:55:55.378388 2431 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 4 23:55:55.384232 kubelet[2431]: I1104 23:55:55.378462 2431 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487-0-0-n-1c2c5ddea4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 4 23:55:55.387511 kubelet[2431]: I1104 23:55:55.386904 2431 topology_manager.go:138] "Creating topology manager with none policy"
Nov 4 23:55:55.387511 kubelet[2431]: I1104 23:55:55.386981 2431 container_manager_linux.go:303] "Creating device plugin manager"
Nov 4 23:55:55.389714 kubelet[2431]: I1104 23:55:55.389684 2431 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:55:55.394241 kubelet[2431]: I1104 23:55:55.394177 2431 kubelet.go:480] "Attempting to sync node with API server"
Nov 4 23:55:55.394241 kubelet[2431]: I1104 23:55:55.394230 2431 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:55:55.396663 kubelet[2431]: I1104 23:55:55.396050 2431 kubelet.go:386] "Adding apiserver pod source"
Nov 4 23:55:55.399398 kubelet[2431]: I1104 23:55:55.398647 2431 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:55:55.412885 kubelet[2431]: E1104 23:55:55.412816 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.221.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-0-n-1c2c5ddea4&limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 4 23:55:55.413188 kubelet[2431]: I1104 23:55:55.413164 2431 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:55:55.414521 kubelet[2431]: I1104 23:55:55.414465 2431 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 23:55:55.417509 kubelet[2431]: W1104 23:55:55.417461 2431 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 4 23:55:55.428342 kubelet[2431]: I1104 23:55:55.428316 2431 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:55:55.428596 kubelet[2431]: I1104 23:55:55.428577 2431 server.go:1289] "Started kubelet"
Nov 4 23:55:55.438545 kubelet[2431]: E1104 23:55:55.438460 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.221.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 4 23:55:55.440681 kubelet[2431]: I1104 23:55:55.440612 2431 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:55:55.441011 kubelet[2431]: I1104 23:55:55.440958 2431 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:55:55.441640 kubelet[2431]: I1104 23:55:55.441615 2431 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:55:55.450259 kubelet[2431]: I1104 23:55:55.450213 2431 server.go:317] "Adding debug handlers to kubelet server"
Nov 4 23:55:55.452720 kubelet[2431]: E1104 23:55:55.445769 2431 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.221.150:6443/api/v1/namespaces/default/events\": dial tcp 46.62.221.150:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487-0-0-n-1c2c5ddea4.1874f304a1c957a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487-0-0-n-1c2c5ddea4,UID:ci-4487-0-0-n-1c2c5ddea4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487-0-0-n-1c2c5ddea4,},FirstTimestamp:2025-11-04 23:55:55.428452264 +0000 UTC m=+0.500739460,LastTimestamp:2025-11-04 23:55:55.428452264 +0000 UTC m=+0.500739460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487-0-0-n-1c2c5ddea4,}"
Nov 4 23:55:55.454334 kubelet[2431]: I1104 23:55:55.454309 2431 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:55:55.455189 kubelet[2431]: I1104 23:55:55.455137 2431 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:55:55.466672 kubelet[2431]: E1104 23:55:55.466617 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found"
Nov 4 23:55:55.466798 kubelet[2431]: I1104 23:55:55.466682 2431 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:55:55.468095 kubelet[2431]: I1104 23:55:55.468054 2431 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:55:55.468181 kubelet[2431]: I1104 23:55:55.468157 2431 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:55:55.469104 kubelet[2431]: E1104 23:55:55.469060 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.221.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 4 23:55:55.469526 kubelet[2431]: E1104 23:55:55.469421 2431 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:55:55.470056 kubelet[2431]: I1104 23:55:55.469961 2431 factory.go:223] Registration of the systemd container factory successfully
Nov 4 23:55:55.470265 kubelet[2431]: I1104 23:55:55.470075 2431 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:55:55.472545 kubelet[2431]: E1104 23:55:55.472227 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.221.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-0-n-1c2c5ddea4?timeout=10s\": dial tcp 46.62.221.150:6443: connect: connection refused" interval="200ms"
Nov 4 23:55:55.472931 kubelet[2431]: I1104 23:55:55.472777 2431 factory.go:223] Registration of the containerd container factory successfully
Nov 4 23:55:55.494876 kubelet[2431]: I1104 23:55:55.494832 2431 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:55:55.495056 kubelet[2431]: I1104 23:55:55.495043 2431 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:55:55.495128 kubelet[2431]: I1104 23:55:55.495119 2431 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:55:55.501034 kubelet[2431]: I1104 23:55:55.501011 2431 policy_none.go:49] "None policy: Start"
Nov 4 23:55:55.501166 kubelet[2431]: I1104 23:55:55.501150 2431 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:55:55.501251 kubelet[2431]: I1104 23:55:55.501240 2431 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:55:55.504635 kubelet[2431]: I1104 23:55:55.504609 2431 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:55:55.508181 kubelet[2431]: I1104 23:55:55.508126 2431 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:55:55.508620 kubelet[2431]: I1104 23:55:55.508536 2431 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 4 23:55:55.509051 kubelet[2431]: I1104 23:55:55.508720 2431 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:55:55.509051 kubelet[2431]: I1104 23:55:55.508735 2431 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 4 23:55:55.509051 kubelet[2431]: E1104 23:55:55.508801 2431 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:55:55.513555 kubelet[2431]: E1104 23:55:55.512885 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.221.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 4 23:55:55.517458 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 4 23:55:55.530893 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 4 23:55:55.536334 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 4 23:55:55.547879 kubelet[2431]: E1104 23:55:55.547774 2431 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 23:55:55.548047 kubelet[2431]: I1104 23:55:55.548011 2431 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 23:55:55.548580 kubelet[2431]: I1104 23:55:55.548038 2431 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 23:55:55.549658 kubelet[2431]: I1104 23:55:55.549623 2431 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 23:55:55.550815 kubelet[2431]: E1104 23:55:55.550778 2431 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 23:55:55.550892 kubelet[2431]: E1104 23:55:55.550846 2431 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:55.631899 systemd[1]: Created slice kubepods-burstable-pod98993ea9c31e60c04e2d2abbbc02869d.slice - libcontainer container kubepods-burstable-pod98993ea9c31e60c04e2d2abbbc02869d.slice. 
Nov 4 23:55:55.651069 kubelet[2431]: I1104 23:55:55.651013 2431 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.652069 kubelet[2431]: E1104 23:55:55.652035 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.221.150:6443/api/v1/nodes\": dial tcp 46.62.221.150:6443: connect: connection refused" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.652798 kubelet[2431]: E1104 23:55:55.652770 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.658318 systemd[1]: Created slice kubepods-burstable-pod2698b9f334a6968c863704ffb41c45f1.slice - libcontainer container kubepods-burstable-pod2698b9f334a6968c863704ffb41c45f1.slice. Nov 4 23:55:55.661711 kubelet[2431]: E1104 23:55:55.661663 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.669957 kubelet[2431]: I1104 23:55:55.669925 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3353c6d44c9b738a1fd87138ba179673-kubeconfig\") pod \"kube-scheduler-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"3353c6d44c9b738a1fd87138ba179673\") " pod="kube-system/kube-scheduler-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670053 kubelet[2431]: I1104 23:55:55.669973 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98993ea9c31e60c04e2d2abbbc02869d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"98993ea9c31e60c04e2d2abbbc02869d\") " pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670053 
kubelet[2431]: I1104 23:55:55.670005 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-ca-certs\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670053 kubelet[2431]: I1104 23:55:55.670027 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670053 kubelet[2431]: I1104 23:55:55.670051 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98993ea9c31e60c04e2d2abbbc02869d-ca-certs\") pod \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"98993ea9c31e60c04e2d2abbbc02869d\") " pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670295 kubelet[2431]: I1104 23:55:55.670072 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98993ea9c31e60c04e2d2abbbc02869d-k8s-certs\") pod \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"98993ea9c31e60c04e2d2abbbc02869d\") " pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670295 kubelet[2431]: I1104 23:55:55.670095 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-k8s-certs\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: 
\"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670295 kubelet[2431]: I1104 23:55:55.670120 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-kubeconfig\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.670295 kubelet[2431]: I1104 23:55:55.670143 2431 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.673638 kubelet[2431]: E1104 23:55:55.673542 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.221.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-0-n-1c2c5ddea4?timeout=10s\": dial tcp 46.62.221.150:6443: connect: connection refused" interval="400ms" Nov 4 23:55:55.679057 systemd[1]: Created slice kubepods-burstable-pod3353c6d44c9b738a1fd87138ba179673.slice - libcontainer container kubepods-burstable-pod3353c6d44c9b738a1fd87138ba179673.slice. 
Nov 4 23:55:55.683136 kubelet[2431]: E1104 23:55:55.683090 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.856802 kubelet[2431]: I1104 23:55:55.855713 2431 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.857099 kubelet[2431]: E1104 23:55:55.857061 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.221.150:6443/api/v1/nodes\": dial tcp 46.62.221.150:6443: connect: connection refused" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:55.955102 containerd[1627]: time="2025-11-04T23:55:55.954976477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487-0-0-n-1c2c5ddea4,Uid:98993ea9c31e60c04e2d2abbbc02869d,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:55.963506 containerd[1627]: time="2025-11-04T23:55:55.963323561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4,Uid:2698b9f334a6968c863704ffb41c45f1,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:55.986193 containerd[1627]: time="2025-11-04T23:55:55.986098726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487-0-0-n-1c2c5ddea4,Uid:3353c6d44c9b738a1fd87138ba179673,Namespace:kube-system,Attempt:0,}" Nov 4 23:55:56.075103 kubelet[2431]: E1104 23:55:56.075009 2431 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.221.150:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-0-n-1c2c5ddea4?timeout=10s\": dial tcp 46.62.221.150:6443: connect: connection refused" interval="800ms" Nov 4 23:55:56.095727 containerd[1627]: time="2025-11-04T23:55:56.095661812Z" level=info msg="connecting to shim 9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538" 
address="unix:///run/containerd/s/dfc9669b67541338778ec3f891562ec51c73c7d6af89d2883117ed8b7775cc6d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:56.097122 containerd[1627]: time="2025-11-04T23:55:56.097088010Z" level=info msg="connecting to shim 291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b" address="unix:///run/containerd/s/1b68a94eb4d4d62448a2d1cf151a305370ea78d61cd7ea1854a677288883a864" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:56.105609 containerd[1627]: time="2025-11-04T23:55:56.105570715Z" level=info msg="connecting to shim 29cfc3c76fee2eab513f09ec23115c398bf7e6e4bd765d1ef680c3af92ffa2a8" address="unix:///run/containerd/s/56b62beeb1134db667e49c0658bc07680cf1147a4d625358468fb16811ea926d" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:55:56.191068 systemd[1]: Started cri-containerd-291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b.scope - libcontainer container 291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b. Nov 4 23:55:56.191872 systemd[1]: Started cri-containerd-29cfc3c76fee2eab513f09ec23115c398bf7e6e4bd765d1ef680c3af92ffa2a8.scope - libcontainer container 29cfc3c76fee2eab513f09ec23115c398bf7e6e4bd765d1ef680c3af92ffa2a8. Nov 4 23:55:56.193842 systemd[1]: Started cri-containerd-9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538.scope - libcontainer container 9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538. 
Nov 4 23:55:56.238731 kubelet[2431]: E1104 23:55:56.238696 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.221.150:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-0-n-1c2c5ddea4&limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 23:55:56.261981 kubelet[2431]: I1104 23:55:56.261950 2431 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:56.263241 kubelet[2431]: E1104 23:55:56.263209 2431 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.221.150:6443/api/v1/nodes\": dial tcp 46.62.221.150:6443: connect: connection refused" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:56.263763 containerd[1627]: time="2025-11-04T23:55:56.263738109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487-0-0-n-1c2c5ddea4,Uid:3353c6d44c9b738a1fd87138ba179673,Namespace:kube-system,Attempt:0,} returns sandbox id \"291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b\"" Nov 4 23:55:56.267737 containerd[1627]: time="2025-11-04T23:55:56.267695746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487-0-0-n-1c2c5ddea4,Uid:98993ea9c31e60c04e2d2abbbc02869d,Namespace:kube-system,Attempt:0,} returns sandbox id \"29cfc3c76fee2eab513f09ec23115c398bf7e6e4bd765d1ef680c3af92ffa2a8\"" Nov 4 23:55:56.271655 containerd[1627]: time="2025-11-04T23:55:56.271620137Z" level=info msg="CreateContainer within sandbox \"291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 23:55:56.275181 containerd[1627]: time="2025-11-04T23:55:56.275149728Z" level=info msg="CreateContainer within sandbox \"29cfc3c76fee2eab513f09ec23115c398bf7e6e4bd765d1ef680c3af92ffa2a8\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 23:55:56.289020 containerd[1627]: time="2025-11-04T23:55:56.288993960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4,Uid:2698b9f334a6968c863704ffb41c45f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538\"" Nov 4 23:55:56.290315 containerd[1627]: time="2025-11-04T23:55:56.290289188Z" level=info msg="Container 1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:56.291164 containerd[1627]: time="2025-11-04T23:55:56.291062627Z" level=info msg="Container 588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:56.296008 containerd[1627]: time="2025-11-04T23:55:56.295983539Z" level=info msg="CreateContainer within sandbox \"9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 23:55:56.305268 containerd[1627]: time="2025-11-04T23:55:56.305247230Z" level=info msg="CreateContainer within sandbox \"29cfc3c76fee2eab513f09ec23115c398bf7e6e4bd765d1ef680c3af92ffa2a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48\"" Nov 4 23:55:56.306106 containerd[1627]: time="2025-11-04T23:55:56.306081925Z" level=info msg="StartContainer for \"588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48\"" Nov 4 23:55:56.306971 containerd[1627]: time="2025-11-04T23:55:56.306943493Z" level=info msg="connecting to shim 588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48" address="unix:///run/containerd/s/56b62beeb1134db667e49c0658bc07680cf1147a4d625358468fb16811ea926d" protocol=ttrpc version=3 Nov 4 23:55:56.307414 containerd[1627]: time="2025-11-04T23:55:56.307398500Z" 
level=info msg="Container f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:55:56.310140 containerd[1627]: time="2025-11-04T23:55:56.310083786Z" level=info msg="CreateContainer within sandbox \"291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\"" Nov 4 23:55:56.310426 containerd[1627]: time="2025-11-04T23:55:56.310402880Z" level=info msg="StartContainer for \"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\"" Nov 4 23:55:56.311992 containerd[1627]: time="2025-11-04T23:55:56.311947407Z" level=info msg="connecting to shim 1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3" address="unix:///run/containerd/s/1b68a94eb4d4d62448a2d1cf151a305370ea78d61cd7ea1854a677288883a864" protocol=ttrpc version=3 Nov 4 23:55:56.314026 containerd[1627]: time="2025-11-04T23:55:56.313987863Z" level=info msg="CreateContainer within sandbox \"9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\"" Nov 4 23:55:56.314349 containerd[1627]: time="2025-11-04T23:55:56.314313911Z" level=info msg="StartContainer for \"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\"" Nov 4 23:55:56.315489 containerd[1627]: time="2025-11-04T23:55:56.315320688Z" level=info msg="connecting to shim f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13" address="unix:///run/containerd/s/dfc9669b67541338778ec3f891562ec51c73c7d6af89d2883117ed8b7775cc6d" protocol=ttrpc version=3 Nov 4 23:55:56.329599 systemd[1]: Started cri-containerd-588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48.scope - libcontainer container 
588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48. Nov 4 23:55:56.337632 systemd[1]: Started cri-containerd-1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3.scope - libcontainer container 1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3. Nov 4 23:55:56.348669 systemd[1]: Started cri-containerd-f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13.scope - libcontainer container f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13. Nov 4 23:55:56.407206 containerd[1627]: time="2025-11-04T23:55:56.407171813Z" level=info msg="StartContainer for \"588c31020988b844d4a6c40ac4742b620bec146bf3b320074b6e431f59c23e48\" returns successfully" Nov 4 23:55:56.418266 containerd[1627]: time="2025-11-04T23:55:56.418235135Z" level=info msg="StartContainer for \"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\" returns successfully" Nov 4 23:55:56.425398 containerd[1627]: time="2025-11-04T23:55:56.425374037Z" level=info msg="StartContainer for \"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\" returns successfully" Nov 4 23:55:56.509740 kubelet[2431]: E1104 23:55:56.509647 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.221.150:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 23:55:56.523287 kubelet[2431]: E1104 23:55:56.523268 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:56.526386 kubelet[2431]: E1104 23:55:56.526368 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" 
node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:56.527760 kubelet[2431]: E1104 23:55:56.527740 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:56.553380 kubelet[2431]: E1104 23:55:56.553350 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.221.150:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 23:55:56.582121 kubelet[2431]: E1104 23:55:56.582096 2431 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.221.150:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.221.150:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 23:55:57.066673 kubelet[2431]: I1104 23:55:57.066622 2431 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:57.532213 kubelet[2431]: E1104 23:55:57.532168 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:57.532502 kubelet[2431]: E1104 23:55:57.532413 2431 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:58.034074 kubelet[2431]: E1104 23:55:58.034046 2431 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487-0-0-n-1c2c5ddea4\" not found" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:58.102519 kubelet[2431]: I1104 
23:55:58.102144 2431 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:58.102519 kubelet[2431]: E1104 23:55:58.102177 2431 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487-0-0-n-1c2c5ddea4\": node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.115585 kubelet[2431]: E1104 23:55:58.115516 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.216314 kubelet[2431]: E1104 23:55:58.216235 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.316628 kubelet[2431]: E1104 23:55:58.316402 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.416962 kubelet[2431]: E1104 23:55:58.416894 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.517344 kubelet[2431]: E1104 23:55:58.517267 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.617801 kubelet[2431]: E1104 23:55:58.617652 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.718811 kubelet[2431]: E1104 23:55:58.718743 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.819348 kubelet[2431]: E1104 23:55:58.819237 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:58.920045 kubelet[2431]: E1104 23:55:58.919836 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:59.020635 kubelet[2431]: E1104 23:55:59.020580 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:59.040555 update_engine[1608]: I20251104 23:55:59.039799 1608 update_attempter.cc:509] Updating boot flags... Nov 4 23:55:59.123343 kubelet[2431]: E1104 23:55:59.121625 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:59.222666 kubelet[2431]: E1104 23:55:59.222541 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:59.323462 kubelet[2431]: E1104 23:55:59.323438 2431 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found" Nov 4 23:55:59.427208 kubelet[2431]: I1104 23:55:59.427168 2431 apiserver.go:52] "Watching apiserver" Nov 4 23:55:59.468305 kubelet[2431]: I1104 23:55:59.468249 2431 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 4 23:55:59.472895 kubelet[2431]: I1104 23:55:59.472757 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:59.491063 kubelet[2431]: I1104 23:55:59.491027 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:55:59.498080 kubelet[2431]: I1104 23:55:59.497949 2431 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:00.409338 systemd[1]: Reload requested from client PID 2730 ('systemctl') (unit session-7.scope)... Nov 4 23:56:00.409368 systemd[1]: Reloading... Nov 4 23:56:00.485576 zram_generator::config[2771]: No configuration found. 
Nov 4 23:56:00.680352 systemd[1]: Reloading finished in 270 ms. Nov 4 23:56:00.700056 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:00.721426 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 23:56:00.721635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:00.721682 systemd[1]: kubelet.service: Consumed 920ms CPU time, 128.6M memory peak. Nov 4 23:56:00.723440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 23:56:00.833550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 23:56:00.840727 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 23:56:00.890251 kubelet[2826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 23:56:00.891512 kubelet[2826]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 23:56:00.891512 kubelet[2826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 4 23:56:00.891512 kubelet[2826]: I1104 23:56:00.890626 2826 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 23:56:00.898409 kubelet[2826]: I1104 23:56:00.898369 2826 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 4 23:56:00.898409 kubelet[2826]: I1104 23:56:00.898389 2826 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 23:56:00.898598 kubelet[2826]: I1104 23:56:00.898574 2826 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 23:56:00.900173 kubelet[2826]: I1104 23:56:00.900148 2826 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 23:56:00.902373 kubelet[2826]: I1104 23:56:00.902200 2826 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 23:56:00.908531 kubelet[2826]: I1104 23:56:00.908521 2826 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 23:56:00.911039 kubelet[2826]: I1104 23:56:00.911029 2826 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 4 23:56:00.911227 kubelet[2826]: I1104 23:56:00.911212 2826 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 23:56:00.911412 kubelet[2826]: I1104 23:56:00.911287 2826 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487-0-0-n-1c2c5ddea4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 23:56:00.911560 kubelet[2826]: I1104 23:56:00.911552 2826 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 
23:56:00.911626 kubelet[2826]: I1104 23:56:00.911620 2826 container_manager_linux.go:303] "Creating device plugin manager"
Nov 4 23:56:00.912765 kubelet[2826]: I1104 23:56:00.912710 2826 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:56:00.913820 kubelet[2826]: I1104 23:56:00.913761 2826 kubelet.go:480] "Attempting to sync node with API server"
Nov 4 23:56:00.913820 kubelet[2826]: I1104 23:56:00.913773 2826 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 4 23:56:00.913820 kubelet[2826]: I1104 23:56:00.913790 2826 kubelet.go:386] "Adding apiserver pod source"
Nov 4 23:56:00.913965 kubelet[2826]: I1104 23:56:00.913955 2826 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 4 23:56:00.918629 kubelet[2826]: I1104 23:56:00.918599 2826 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 4 23:56:00.919221 kubelet[2826]: I1104 23:56:00.919135 2826 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 4 23:56:00.932494 kubelet[2826]: I1104 23:56:00.932358 2826 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 4 23:56:00.933263 kubelet[2826]: I1104 23:56:00.932969 2826 server.go:1289] "Started kubelet"
Nov 4 23:56:00.933913 kubelet[2826]: I1104 23:56:00.933839 2826 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 4 23:56:00.934327 kubelet[2826]: I1104 23:56:00.934144 2826 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 4 23:56:00.934571 kubelet[2826]: I1104 23:56:00.934424 2826 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 4 23:56:00.935905 kubelet[2826]: I1104 23:56:00.935395 2826 server.go:317] "Adding debug handlers to kubelet server"
Nov 4 23:56:00.936546 kubelet[2826]: I1104 23:56:00.936439 2826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 4 23:56:00.938728 kubelet[2826]: I1104 23:56:00.938704 2826 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 4 23:56:00.940501 kubelet[2826]: I1104 23:56:00.940228 2826 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 4 23:56:00.940501 kubelet[2826]: E1104 23:56:00.940413 2826 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-0-n-1c2c5ddea4\" not found"
Nov 4 23:56:00.941668 kubelet[2826]: I1104 23:56:00.941645 2826 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 4 23:56:00.941749 kubelet[2826]: I1104 23:56:00.941736 2826 reconciler.go:26] "Reconciler: start to sync state"
Nov 4 23:56:00.946823 kubelet[2826]: I1104 23:56:00.946814 2826 factory.go:223] Registration of the systemd container factory successfully
Nov 4 23:56:00.947107 kubelet[2826]: I1104 23:56:00.947056 2826 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 4 23:56:00.949009 kubelet[2826]: E1104 23:56:00.948986 2826 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 4 23:56:00.950032 kubelet[2826]: I1104 23:56:00.950005 2826 factory.go:223] Registration of the containerd container factory successfully
Nov 4 23:56:00.958895 kubelet[2826]: I1104 23:56:00.958845 2826 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 4 23:56:00.962821 kubelet[2826]: I1104 23:56:00.962807 2826 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 4 23:56:00.963219 kubelet[2826]: I1104 23:56:00.963210 2826 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 4 23:56:00.963380 kubelet[2826]: I1104 23:56:00.963299 2826 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 4 23:56:00.963380 kubelet[2826]: I1104 23:56:00.963308 2826 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 4 23:56:00.964596 kubelet[2826]: E1104 23:56:00.963338 2826 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 4 23:56:01.004436 kubelet[2826]: I1104 23:56:01.004360 2826 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 4 23:56:01.004436 kubelet[2826]: I1104 23:56:01.004410 2826 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004613 2826 state_mem.go:36] "Initialized new in-memory state store"
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004714 2826 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004722 2826 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004751 2826 policy_none.go:49] "None policy: Start"
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004759 2826 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004767 2826 state_mem.go:35] "Initializing new in-memory state store"
Nov 4 23:56:01.004887 kubelet[2826]: I1104 23:56:01.004842 2826 state_mem.go:75] "Updated machine memory state"
Nov 4 23:56:01.008697 kubelet[2826]: E1104 23:56:01.008231 2826 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 4 23:56:01.008697 kubelet[2826]: I1104 23:56:01.008368 2826 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 4 23:56:01.008697 kubelet[2826]: I1104 23:56:01.008380 2826 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 4 23:56:01.012551 kubelet[2826]: I1104 23:56:01.011607 2826 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 4 23:56:01.013590 kubelet[2826]: E1104 23:56:01.013576 2826 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 4 23:56:01.065699 kubelet[2826]: I1104 23:56:01.065668 2826 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.066087 kubelet[2826]: I1104 23:56:01.066031 2826 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.066087 kubelet[2826]: I1104 23:56:01.066074 2826 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.073092 kubelet[2826]: E1104 23:56:01.073070 2826 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" already exists" pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.073730 kubelet[2826]: E1104 23:56:01.073684 2826 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" already exists" pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.073938 kubelet[2826]: E1104 23:56:01.073812 2826 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487-0-0-n-1c2c5ddea4\" already exists" pod="kube-system/kube-scheduler-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.111730 kubelet[2826]: I1104 23:56:01.111415 2826 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.120730 kubelet[2826]: I1104 23:56:01.120695 2826 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.120803 kubelet[2826]: I1104 23:56:01.120791 2826 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.243446 kubelet[2826]: I1104 23:56:01.243301 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-kubeconfig\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.243446 kubelet[2826]: I1104 23:56:01.243358 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.243446 kubelet[2826]: I1104 23:56:01.243402 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98993ea9c31e60c04e2d2abbbc02869d-ca-certs\") pod \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"98993ea9c31e60c04e2d2abbbc02869d\") " pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.243994 kubelet[2826]: I1104 23:56:01.243924 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98993ea9c31e60c04e2d2abbbc02869d-k8s-certs\") pod \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"98993ea9c31e60c04e2d2abbbc02869d\") " pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.245282 kubelet[2826]: I1104 23:56:01.245184 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98993ea9c31e60c04e2d2abbbc02869d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"98993ea9c31e60c04e2d2abbbc02869d\") " pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.245282 kubelet[2826]: I1104 23:56:01.245246 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.245282 kubelet[2826]: I1104 23:56:01.245271 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-k8s-certs\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.245531 kubelet[2826]: I1104 23:56:01.245297 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3353c6d44c9b738a1fd87138ba179673-kubeconfig\") pod \"kube-scheduler-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"3353c6d44c9b738a1fd87138ba179673\") " pod="kube-system/kube-scheduler-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.245531 kubelet[2826]: I1104 23:56:01.245320 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2698b9f334a6968c863704ffb41c45f1-ca-certs\") pod \"kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4\" (UID: \"2698b9f334a6968c863704ffb41c45f1\") " pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.918612 kubelet[2826]: I1104 23:56:01.917508 2826 apiserver.go:52] "Watching apiserver"
Nov 4 23:56:01.943889 kubelet[2826]: I1104 23:56:01.943017 2826 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 4 23:56:01.984617 kubelet[2826]: I1104 23:56:01.983971 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4" podStartSLOduration=2.9839573169999998 podStartE2EDuration="2.983957317s" podCreationTimestamp="2025-11-04 23:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:01.983289144 +0000 UTC m=+1.134028268" watchObservedRunningTime="2025-11-04 23:56:01.983957317 +0000 UTC m=+1.134696450"
Nov 4 23:56:01.985818 kubelet[2826]: I1104 23:56:01.985306 2826 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:01.994772 kubelet[2826]: E1104 23:56:01.993891 2826 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487-0-0-n-1c2c5ddea4\" already exists" pod="kube-system/kube-apiserver-ci-4487-0-0-n-1c2c5ddea4"
Nov 4 23:56:02.004856 kubelet[2826]: I1104 23:56:02.004814 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487-0-0-n-1c2c5ddea4" podStartSLOduration=3.004800196 podStartE2EDuration="3.004800196s" podCreationTimestamp="2025-11-04 23:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:01.994321266 +0000 UTC m=+1.145060391" watchObservedRunningTime="2025-11-04 23:56:02.004800196 +0000 UTC m=+1.155539320"
Nov 4 23:56:02.017042 kubelet[2826]: I1104 23:56:02.015614 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487-0-0-n-1c2c5ddea4" podStartSLOduration=3.015604873 podStartE2EDuration="3.015604873s" podCreationTimestamp="2025-11-04 23:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:02.004979063 +0000 UTC m=+1.155718177" watchObservedRunningTime="2025-11-04 23:56:02.015604873 +0000 UTC m=+1.166343987"
Nov 4 23:56:04.988940 kubelet[2826]: I1104 23:56:04.988845 2826 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 4 23:56:04.989753 containerd[1627]: time="2025-11-04T23:56:04.989658031Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 4 23:56:04.991200 kubelet[2826]: I1104 23:56:04.991131 2826 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 4 23:56:05.966662 systemd[1]: Created slice kubepods-besteffort-pod70505e89_bb8a_45a6_bd3f_33d9bc97d5df.slice - libcontainer container kubepods-besteffort-pod70505e89_bb8a_45a6_bd3f_33d9bc97d5df.slice.
Nov 4 23:56:05.986507 kubelet[2826]: I1104 23:56:05.986430 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70505e89-bb8a-45a6-bd3f-33d9bc97d5df-kube-proxy\") pod \"kube-proxy-ggdq4\" (UID: \"70505e89-bb8a-45a6-bd3f-33d9bc97d5df\") " pod="kube-system/kube-proxy-ggdq4"
Nov 4 23:56:05.986698 kubelet[2826]: I1104 23:56:05.986679 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70505e89-bb8a-45a6-bd3f-33d9bc97d5df-xtables-lock\") pod \"kube-proxy-ggdq4\" (UID: \"70505e89-bb8a-45a6-bd3f-33d9bc97d5df\") " pod="kube-system/kube-proxy-ggdq4"
Nov 4 23:56:05.986877 kubelet[2826]: I1104 23:56:05.986861 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70505e89-bb8a-45a6-bd3f-33d9bc97d5df-lib-modules\") pod \"kube-proxy-ggdq4\" (UID: \"70505e89-bb8a-45a6-bd3f-33d9bc97d5df\") " pod="kube-system/kube-proxy-ggdq4"
Nov 4 23:56:05.987094 kubelet[2826]: I1104 23:56:05.987049 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgmr4\" (UniqueName: \"kubernetes.io/projected/70505e89-bb8a-45a6-bd3f-33d9bc97d5df-kube-api-access-fgmr4\") pod \"kube-proxy-ggdq4\" (UID: \"70505e89-bb8a-45a6-bd3f-33d9bc97d5df\") " pod="kube-system/kube-proxy-ggdq4"
Nov 4 23:56:06.189164 kubelet[2826]: I1104 23:56:06.189096 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hs7m\" (UniqueName: \"kubernetes.io/projected/8338c2db-6760-48ed-bc57-8c218d1b3894-kube-api-access-5hs7m\") pod \"tigera-operator-7dcd859c48-smz9k\" (UID: \"8338c2db-6760-48ed-bc57-8c218d1b3894\") " pod="tigera-operator/tigera-operator-7dcd859c48-smz9k"
Nov 4 23:56:06.189164 kubelet[2826]: I1104 23:56:06.189153 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8338c2db-6760-48ed-bc57-8c218d1b3894-var-lib-calico\") pod \"tigera-operator-7dcd859c48-smz9k\" (UID: \"8338c2db-6760-48ed-bc57-8c218d1b3894\") " pod="tigera-operator/tigera-operator-7dcd859c48-smz9k"
Nov 4 23:56:06.196363 systemd[1]: Created slice kubepods-besteffort-pod8338c2db_6760_48ed_bc57_8c218d1b3894.slice - libcontainer container kubepods-besteffort-pod8338c2db_6760_48ed_bc57_8c218d1b3894.slice.
Nov 4 23:56:06.276089 containerd[1627]: time="2025-11-04T23:56:06.276012325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggdq4,Uid:70505e89-bb8a-45a6-bd3f-33d9bc97d5df,Namespace:kube-system,Attempt:0,}"
Nov 4 23:56:06.302352 containerd[1627]: time="2025-11-04T23:56:06.302265543Z" level=info msg="connecting to shim 0483de3d419c0072a634364c8355a4b92388345de75489f0274890532412dac6" address="unix:///run/containerd/s/0199257452e34a42cf57a324c140a9724f9d74d5950bc3c1b2c64a69dcb1e8d2" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:56:06.341753 systemd[1]: Started cri-containerd-0483de3d419c0072a634364c8355a4b92388345de75489f0274890532412dac6.scope - libcontainer container 0483de3d419c0072a634364c8355a4b92388345de75489f0274890532412dac6.
Nov 4 23:56:06.382146 containerd[1627]: time="2025-11-04T23:56:06.382086385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ggdq4,Uid:70505e89-bb8a-45a6-bd3f-33d9bc97d5df,Namespace:kube-system,Attempt:0,} returns sandbox id \"0483de3d419c0072a634364c8355a4b92388345de75489f0274890532412dac6\""
Nov 4 23:56:06.390057 containerd[1627]: time="2025-11-04T23:56:06.390010767Z" level=info msg="CreateContainer within sandbox \"0483de3d419c0072a634364c8355a4b92388345de75489f0274890532412dac6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 4 23:56:06.403499 containerd[1627]: time="2025-11-04T23:56:06.402599498Z" level=info msg="Container f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:06.411876 containerd[1627]: time="2025-11-04T23:56:06.411851871Z" level=info msg="CreateContainer within sandbox \"0483de3d419c0072a634364c8355a4b92388345de75489f0274890532412dac6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4\""
Nov 4 23:56:06.413867 containerd[1627]: time="2025-11-04T23:56:06.413748483Z" level=info msg="StartContainer for \"f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4\""
Nov 4 23:56:06.417735 containerd[1627]: time="2025-11-04T23:56:06.417714039Z" level=info msg="connecting to shim f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4" address="unix:///run/containerd/s/0199257452e34a42cf57a324c140a9724f9d74d5950bc3c1b2c64a69dcb1e8d2" protocol=ttrpc version=3
Nov 4 23:56:06.446744 systemd[1]: Started cri-containerd-f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4.scope - libcontainer container f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4.
Nov 4 23:56:06.501970 containerd[1627]: time="2025-11-04T23:56:06.501924588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-smz9k,Uid:8338c2db-6760-48ed-bc57-8c218d1b3894,Namespace:tigera-operator,Attempt:0,}"
Nov 4 23:56:06.528758 containerd[1627]: time="2025-11-04T23:56:06.528590790Z" level=info msg="StartContainer for \"f47c44bea825b989f4133f07f69937e604f910d56d80401fb6165ea0bf5fe9d4\" returns successfully"
Nov 4 23:56:06.541554 containerd[1627]: time="2025-11-04T23:56:06.541505468Z" level=info msg="connecting to shim 194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b" address="unix:///run/containerd/s/1a39521528c4d869da6d4e0caa647cc814d0901a2afb8a5a9f354d644dbc99a1" namespace=k8s.io protocol=ttrpc version=3
Nov 4 23:56:06.570675 systemd[1]: Started cri-containerd-194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b.scope - libcontainer container 194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b.
Nov 4 23:56:06.620603 containerd[1627]: time="2025-11-04T23:56:06.620565888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-smz9k,Uid:8338c2db-6760-48ed-bc57-8c218d1b3894,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b\""
Nov 4 23:56:06.622667 containerd[1627]: time="2025-11-04T23:56:06.622607488Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 4 23:56:07.022035 kubelet[2826]: I1104 23:56:07.021883 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ggdq4" podStartSLOduration=2.021860671 podStartE2EDuration="2.021860671s" podCreationTimestamp="2025-11-04 23:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:07.021654267 +0000 UTC m=+6.172393411" watchObservedRunningTime="2025-11-04 23:56:07.021860671 +0000 UTC m=+6.172599815"
Nov 4 23:56:07.100535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856047251.mount: Deactivated successfully.
Nov 4 23:56:08.973748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907523117.mount: Deactivated successfully.
Nov 4 23:56:09.415884 containerd[1627]: time="2025-11-04T23:56:09.415827366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:56:09.416967 containerd[1627]: time="2025-11-04T23:56:09.416803945Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 4 23:56:09.417847 containerd[1627]: time="2025-11-04T23:56:09.417825661Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:56:09.419691 containerd[1627]: time="2025-11-04T23:56:09.419664709Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:56:09.420111 containerd[1627]: time="2025-11-04T23:56:09.420091516Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.797453898s"
Nov 4 23:56:09.420176 containerd[1627]: time="2025-11-04T23:56:09.420164729Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 4 23:56:09.424544 containerd[1627]: time="2025-11-04T23:56:09.423977040Z" level=info msg="CreateContainer within sandbox \"194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 4 23:56:09.430489 containerd[1627]: time="2025-11-04T23:56:09.430455498Z" level=info msg="Container 43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:09.447298 containerd[1627]: time="2025-11-04T23:56:09.447261899Z" level=info msg="CreateContainer within sandbox \"194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\""
Nov 4 23:56:09.448143 containerd[1627]: time="2025-11-04T23:56:09.448086193Z" level=info msg="StartContainer for \"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\""
Nov 4 23:56:09.450267 containerd[1627]: time="2025-11-04T23:56:09.450245258Z" level=info msg="connecting to shim 43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b" address="unix:///run/containerd/s/1a39521528c4d869da6d4e0caa647cc814d0901a2afb8a5a9f354d644dbc99a1" protocol=ttrpc version=3
Nov 4 23:56:09.472591 systemd[1]: Started cri-containerd-43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b.scope - libcontainer container 43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b.
Nov 4 23:56:09.506923 containerd[1627]: time="2025-11-04T23:56:09.506880798Z" level=info msg="StartContainer for \"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\" returns successfully"
Nov 4 23:56:13.870904 kubelet[2826]: I1104 23:56:13.870793 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-smz9k" podStartSLOduration=5.071097178 podStartE2EDuration="7.869824564s" podCreationTimestamp="2025-11-04 23:56:06 +0000 UTC" firstStartedPulling="2025-11-04 23:56:06.622004876 +0000 UTC m=+5.772743989" lastFinishedPulling="2025-11-04 23:56:09.420732261 +0000 UTC m=+8.571471375" observedRunningTime="2025-11-04 23:56:10.034104908 +0000 UTC m=+9.184844032" watchObservedRunningTime="2025-11-04 23:56:13.869824564 +0000 UTC m=+13.020563688"
Nov 4 23:56:15.366288 sudo[1876]: pam_unix(sudo:session): session closed for user root
Nov 4 23:56:15.528608 sshd[1875]: Connection closed by 147.75.109.163 port 44872
Nov 4 23:56:15.529307 sshd-session[1857]: pam_unix(sshd:session): session closed for user core
Nov 4 23:56:15.533060 systemd-logind[1604]: Session 7 logged out. Waiting for processes to exit.
Nov 4 23:56:15.533826 systemd[1]: sshd@6-46.62.221.150:22-147.75.109.163:44872.service: Deactivated successfully.
Nov 4 23:56:15.537429 systemd[1]: session-7.scope: Deactivated successfully.
Nov 4 23:56:15.537758 systemd[1]: session-7.scope: Consumed 5.452s CPU time, 154.6M memory peak.
Nov 4 23:56:15.540387 systemd-logind[1604]: Removed session 7.
Nov 4 23:56:20.880558 systemd[1]: Created slice kubepods-besteffort-pod98a16a80_0a1a_422c_9e61_fd21dba3eadc.slice - libcontainer container kubepods-besteffort-pod98a16a80_0a1a_422c_9e61_fd21dba3eadc.slice.
Nov 4 23:56:20.882508 kubelet[2826]: I1104 23:56:20.882426 2826 status_manager.go:895] "Failed to get status for pod" podUID="98a16a80-0a1a-422c-9e61-fd21dba3eadc" pod="calico-system/calico-typha-84b969dc9b-dj4hv" err="pods \"calico-typha-84b969dc9b-dj4hv\" is forbidden: User \"system:node:ci-4487-0-0-n-1c2c5ddea4\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-n-1c2c5ddea4' and this object"
Nov 4 23:56:20.884423 kubelet[2826]: E1104 23:56:20.883548 2826 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4487-0-0-n-1c2c5ddea4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-n-1c2c5ddea4' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Nov 4 23:56:20.885515 kubelet[2826]: E1104 23:56:20.883578 2826 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4487-0-0-n-1c2c5ddea4\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487-0-0-n-1c2c5ddea4' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Nov 4 23:56:20.987897 kubelet[2826]: I1104 23:56:20.987790 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlnwv\" (UniqueName: \"kubernetes.io/projected/98a16a80-0a1a-422c-9e61-fd21dba3eadc-kube-api-access-tlnwv\") pod \"calico-typha-84b969dc9b-dj4hv\" (UID: \"98a16a80-0a1a-422c-9e61-fd21dba3eadc\") " pod="calico-system/calico-typha-84b969dc9b-dj4hv"
Nov 4 23:56:20.987897 kubelet[2826]: I1104 23:56:20.987885 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/98a16a80-0a1a-422c-9e61-fd21dba3eadc-typha-certs\") pod \"calico-typha-84b969dc9b-dj4hv\" (UID: \"98a16a80-0a1a-422c-9e61-fd21dba3eadc\") " pod="calico-system/calico-typha-84b969dc9b-dj4hv"
Nov 4 23:56:20.988118 kubelet[2826]: I1104 23:56:20.987916 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98a16a80-0a1a-422c-9e61-fd21dba3eadc-tigera-ca-bundle\") pod \"calico-typha-84b969dc9b-dj4hv\" (UID: \"98a16a80-0a1a-422c-9e61-fd21dba3eadc\") " pod="calico-system/calico-typha-84b969dc9b-dj4hv"
Nov 4 23:56:21.116685 systemd[1]: Created slice kubepods-besteffort-pod7dd6ce51_97c6_4c6f_8703_937244e86ab5.slice - libcontainer container kubepods-besteffort-pod7dd6ce51_97c6_4c6f_8703_937244e86ab5.slice.
Nov 4 23:56:21.189605 kubelet[2826]: I1104 23:56:21.189201 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-var-run-calico\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189605 kubelet[2826]: I1104 23:56:21.189248 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dd6ce51-97c6-4c6f-8703-937244e86ab5-tigera-ca-bundle\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189605 kubelet[2826]: I1104 23:56:21.189261 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-var-lib-calico\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189605 kubelet[2826]: I1104 23:56:21.189283 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-cni-log-dir\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189605 kubelet[2826]: I1104 23:56:21.189295 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-cni-bin-dir\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189872 kubelet[2826]: I1104 23:56:21.189311 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-xtables-lock\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189872 kubelet[2826]: I1104 23:56:21.189325 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-cni-net-dir\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189872 kubelet[2826]: I1104 23:56:21.189337 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-flexvol-driver-host\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189872 kubelet[2826]: I1104 23:56:21.189351 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7dd6ce51-97c6-4c6f-8703-937244e86ab5-node-certs\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.189872 kubelet[2826]: I1104 23:56:21.189362 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrxfp\" (UniqueName: \"kubernetes.io/projected/7dd6ce51-97c6-4c6f-8703-937244e86ab5-kube-api-access-rrxfp\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.190584 kubelet[2826]: I1104 23:56:21.189374 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-lib-modules\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.190584 kubelet[2826]: I1104 23:56:21.189387 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7dd6ce51-97c6-4c6f-8703-937244e86ab5-policysync\") pod \"calico-node-6vhbh\" (UID: \"7dd6ce51-97c6-4c6f-8703-937244e86ab5\") " pod="calico-system/calico-node-6vhbh"
Nov 4 23:56:21.281079 kubelet[2826]: E1104 23:56:21.280553 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8"
Nov 4 23:56:21.299536 kubelet[2826]: E1104 23:56:21.298791 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.299536 kubelet[2826]: W1104 23:56:21.298845 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.299536 kubelet[2826]: E1104 23:56:21.298873 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:56:21.360063 kubelet[2826]: E1104 23:56:21.360024 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.360063 kubelet[2826]: W1104 23:56:21.360054 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.360199 kubelet[2826]: E1104 23:56:21.360079 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:56:21.360356 kubelet[2826]: E1104 23:56:21.360330 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.360356 kubelet[2826]: W1104 23:56:21.360351 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.360451 kubelet[2826]: E1104 23:56:21.360365 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:56:21.360712 kubelet[2826]: E1104 23:56:21.360661 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.360712 kubelet[2826]: W1104 23:56:21.360688 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.360712 kubelet[2826]: E1104 23:56:21.360712 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:56:21.361329 kubelet[2826]: E1104 23:56:21.361298 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.361329 kubelet[2826]: W1104 23:56:21.361316 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.361329 kubelet[2826]: E1104 23:56:21.361325 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:56:21.362011 kubelet[2826]: E1104 23:56:21.361972 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.362011 kubelet[2826]: W1104 23:56:21.361989 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.362011 kubelet[2826]: E1104 23:56:21.361998 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 4 23:56:21.362419 kubelet[2826]: E1104 23:56:21.362377 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 4 23:56:21.362419 kubelet[2826]: W1104 23:56:21.362387 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 4 23:56:21.362419 kubelet[2826]: E1104 23:56:21.362398 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.362549 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.363258 kubelet[2826]: W1104 23:56:21.362556 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.362566 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.362738 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.363258 kubelet[2826]: W1104 23:56:21.362747 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.362757 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.362991 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.363258 kubelet[2826]: W1104 23:56:21.362998 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.363005 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.363258 kubelet[2826]: E1104 23:56:21.363110 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.364335 kubelet[2826]: W1104 23:56:21.363118 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.364335 kubelet[2826]: E1104 23:56:21.363127 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.364335 kubelet[2826]: E1104 23:56:21.363239 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.364335 kubelet[2826]: W1104 23:56:21.363246 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.364335 kubelet[2826]: E1104 23:56:21.363254 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.364335 kubelet[2826]: E1104 23:56:21.363370 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.364335 kubelet[2826]: W1104 23:56:21.363377 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.364335 kubelet[2826]: E1104 23:56:21.363384 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.364335 kubelet[2826]: E1104 23:56:21.363514 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.364335 kubelet[2826]: W1104 23:56:21.363525 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363532 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363654 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.365786 kubelet[2826]: W1104 23:56:21.363662 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363671 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363765 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.365786 kubelet[2826]: W1104 23:56:21.363771 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363777 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363879 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.365786 kubelet[2826]: W1104 23:56:21.363885 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.365786 kubelet[2826]: E1104 23:56:21.363891 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.363996 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.366404 kubelet[2826]: W1104 23:56:21.364004 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.364013 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.364101 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.366404 kubelet[2826]: W1104 23:56:21.364107 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.364113 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.364196 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.366404 kubelet[2826]: W1104 23:56:21.364203 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.364208 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.366404 kubelet[2826]: E1104 23:56:21.364292 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.367097 kubelet[2826]: W1104 23:56:21.364298 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.367097 kubelet[2826]: E1104 23:56:21.364304 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.390681 kubelet[2826]: E1104 23:56:21.390542 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.390681 kubelet[2826]: W1104 23:56:21.390590 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.390681 kubelet[2826]: E1104 23:56:21.390618 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.391353 kubelet[2826]: I1104 23:56:21.390905 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/380d0997-b155-4f76-994b-7e2911c8cbf8-kubelet-dir\") pod \"csi-node-driver-4ms7h\" (UID: \"380d0997-b155-4f76-994b-7e2911c8cbf8\") " pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:21.391855 kubelet[2826]: E1104 23:56:21.391721 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.391855 kubelet[2826]: W1104 23:56:21.391760 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.391855 kubelet[2826]: E1104 23:56:21.391780 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.392676 kubelet[2826]: E1104 23:56:21.392625 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.392918 kubelet[2826]: W1104 23:56:21.392783 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.393065 kubelet[2826]: E1104 23:56:21.393025 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.393654 kubelet[2826]: E1104 23:56:21.393614 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.393654 kubelet[2826]: W1104 23:56:21.393639 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.393654 kubelet[2826]: E1104 23:56:21.393656 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.393911 kubelet[2826]: I1104 23:56:21.393711 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/380d0997-b155-4f76-994b-7e2911c8cbf8-registration-dir\") pod \"csi-node-driver-4ms7h\" (UID: \"380d0997-b155-4f76-994b-7e2911c8cbf8\") " pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:21.394150 kubelet[2826]: E1104 23:56:21.394115 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.394150 kubelet[2826]: W1104 23:56:21.394141 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.394239 kubelet[2826]: E1104 23:56:21.394158 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.394536 kubelet[2826]: E1104 23:56:21.394467 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.394536 kubelet[2826]: W1104 23:56:21.394528 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.394635 kubelet[2826]: E1104 23:56:21.394544 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.394913 kubelet[2826]: E1104 23:56:21.394880 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.394913 kubelet[2826]: W1104 23:56:21.394903 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.394996 kubelet[2826]: E1104 23:56:21.394918 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.394996 kubelet[2826]: I1104 23:56:21.394971 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/380d0997-b155-4f76-994b-7e2911c8cbf8-socket-dir\") pod \"csi-node-driver-4ms7h\" (UID: \"380d0997-b155-4f76-994b-7e2911c8cbf8\") " pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:21.395354 kubelet[2826]: E1104 23:56:21.395320 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.395354 kubelet[2826]: W1104 23:56:21.395345 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.395439 kubelet[2826]: E1104 23:56:21.395359 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.395499 kubelet[2826]: I1104 23:56:21.395425 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/380d0997-b155-4f76-994b-7e2911c8cbf8-varrun\") pod \"csi-node-driver-4ms7h\" (UID: \"380d0997-b155-4f76-994b-7e2911c8cbf8\") " pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:21.395843 kubelet[2826]: E1104 23:56:21.395782 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.395843 kubelet[2826]: W1104 23:56:21.395834 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.395927 kubelet[2826]: E1104 23:56:21.395850 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.396146 kubelet[2826]: E1104 23:56:21.396087 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.396146 kubelet[2826]: W1104 23:56:21.396135 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.396223 kubelet[2826]: E1104 23:56:21.396149 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.396543 kubelet[2826]: E1104 23:56:21.396468 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.396543 kubelet[2826]: W1104 23:56:21.396528 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.396543 kubelet[2826]: E1104 23:56:21.396543 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.396894 kubelet[2826]: I1104 23:56:21.396610 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46m6\" (UniqueName: \"kubernetes.io/projected/380d0997-b155-4f76-994b-7e2911c8cbf8-kube-api-access-t46m6\") pod \"csi-node-driver-4ms7h\" (UID: \"380d0997-b155-4f76-994b-7e2911c8cbf8\") " pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:21.397080 kubelet[2826]: E1104 23:56:21.396957 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.397080 kubelet[2826]: W1104 23:56:21.396973 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.397080 kubelet[2826]: E1104 23:56:21.396987 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.397286 kubelet[2826]: E1104 23:56:21.397264 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.397286 kubelet[2826]: W1104 23:56:21.397282 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.397376 kubelet[2826]: E1104 23:56:21.397296 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.397942 kubelet[2826]: E1104 23:56:21.397908 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.397942 kubelet[2826]: W1104 23:56:21.397932 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.398034 kubelet[2826]: E1104 23:56:21.397950 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.398317 kubelet[2826]: E1104 23:56:21.398209 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.398317 kubelet[2826]: W1104 23:56:21.398235 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.398317 kubelet[2826]: E1104 23:56:21.398258 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.499298 kubelet[2826]: E1104 23:56:21.499151 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.500915 kubelet[2826]: W1104 23:56:21.499548 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.500915 kubelet[2826]: E1104 23:56:21.499596 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.500915 kubelet[2826]: E1104 23:56:21.500625 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.500915 kubelet[2826]: W1104 23:56:21.500656 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.500915 kubelet[2826]: E1104 23:56:21.500687 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.502035 kubelet[2826]: E1104 23:56:21.501534 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.502035 kubelet[2826]: W1104 23:56:21.501564 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.502035 kubelet[2826]: E1104 23:56:21.501581 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.502035 kubelet[2826]: E1104 23:56:21.502037 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.502035 kubelet[2826]: W1104 23:56:21.502057 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.502935 kubelet[2826]: E1104 23:56:21.502078 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.502935 kubelet[2826]: E1104 23:56:21.502918 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.502935 kubelet[2826]: W1104 23:56:21.502933 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.504189 kubelet[2826]: E1104 23:56:21.502951 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.504189 kubelet[2826]: E1104 23:56:21.503654 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.504189 kubelet[2826]: W1104 23:56:21.503670 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.504189 kubelet[2826]: E1104 23:56:21.503686 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.504350 kubelet[2826]: E1104 23:56:21.504287 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.504350 kubelet[2826]: W1104 23:56:21.504304 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.504350 kubelet[2826]: E1104 23:56:21.504319 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.505654 kubelet[2826]: E1104 23:56:21.504978 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.505654 kubelet[2826]: W1104 23:56:21.504995 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.505654 kubelet[2826]: E1104 23:56:21.505010 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.507022 kubelet[2826]: E1104 23:56:21.506453 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.507022 kubelet[2826]: W1104 23:56:21.506822 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.507022 kubelet[2826]: E1104 23:56:21.506857 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.508044 kubelet[2826]: E1104 23:56:21.507987 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.508152 kubelet[2826]: W1104 23:56:21.508051 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.508152 kubelet[2826]: E1104 23:56:21.508070 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.510883 kubelet[2826]: E1104 23:56:21.510843 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.510883 kubelet[2826]: W1104 23:56:21.510873 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.511076 kubelet[2826]: E1104 23:56:21.510894 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.511609 kubelet[2826]: E1104 23:56:21.511574 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.511609 kubelet[2826]: W1104 23:56:21.511601 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.511728 kubelet[2826]: E1104 23:56:21.511617 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.512017 kubelet[2826]: E1104 23:56:21.511981 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.512017 kubelet[2826]: W1104 23:56:21.512007 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.512017 kubelet[2826]: E1104 23:56:21.512020 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.512447 kubelet[2826]: E1104 23:56:21.512410 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.512447 kubelet[2826]: W1104 23:56:21.512439 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.512601 kubelet[2826]: E1104 23:56:21.512454 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.512863 kubelet[2826]: E1104 23:56:21.512822 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.512863 kubelet[2826]: W1104 23:56:21.512846 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.512863 kubelet[2826]: E1104 23:56:21.512859 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.513328 kubelet[2826]: E1104 23:56:21.513296 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.513328 kubelet[2826]: W1104 23:56:21.513321 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.513437 kubelet[2826]: E1104 23:56:21.513334 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.514018 kubelet[2826]: E1104 23:56:21.513984 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.514018 kubelet[2826]: W1104 23:56:21.514010 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.514159 kubelet[2826]: E1104 23:56:21.514024 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.514695 kubelet[2826]: E1104 23:56:21.514658 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.514695 kubelet[2826]: W1104 23:56:21.514684 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.514695 kubelet[2826]: E1104 23:56:21.514698 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.515179 kubelet[2826]: E1104 23:56:21.515140 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.515179 kubelet[2826]: W1104 23:56:21.515173 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.515270 kubelet[2826]: E1104 23:56:21.515189 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.515595 kubelet[2826]: E1104 23:56:21.515563 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.515595 kubelet[2826]: W1104 23:56:21.515583 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.515595 kubelet[2826]: E1104 23:56:21.515597 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.515963 kubelet[2826]: E1104 23:56:21.515932 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.515963 kubelet[2826]: W1104 23:56:21.515953 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.515963 kubelet[2826]: E1104 23:56:21.515965 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.516356 kubelet[2826]: E1104 23:56:21.516321 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.516356 kubelet[2826]: W1104 23:56:21.516345 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.516433 kubelet[2826]: E1104 23:56:21.516359 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.516870 kubelet[2826]: E1104 23:56:21.516826 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.516870 kubelet[2826]: W1104 23:56:21.516857 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.516961 kubelet[2826]: E1104 23:56:21.516879 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.517212 kubelet[2826]: E1104 23:56:21.517180 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.517212 kubelet[2826]: W1104 23:56:21.517202 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.517298 kubelet[2826]: E1104 23:56:21.517215 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.518977 kubelet[2826]: E1104 23:56:21.518929 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.518977 kubelet[2826]: W1104 23:56:21.518953 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.518977 kubelet[2826]: E1104 23:56:21.518969 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.937297 kubelet[2826]: E1104 23:56:21.936685 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.937297 kubelet[2826]: W1104 23:56:21.936714 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.937297 kubelet[2826]: E1104 23:56:21.936740 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:21.947003 kubelet[2826]: E1104 23:56:21.946570 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.947003 kubelet[2826]: W1104 23:56:21.946604 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.947003 kubelet[2826]: E1104 23:56:21.946629 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:21.956171 kubelet[2826]: E1104 23:56:21.956141 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:21.956171 kubelet[2826]: W1104 23:56:21.956160 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:21.956319 kubelet[2826]: E1104 23:56:21.956174 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.089800 kubelet[2826]: E1104 23:56:22.089697 2826 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 4 23:56:22.089995 kubelet[2826]: E1104 23:56:22.089943 2826 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98a16a80-0a1a-422c-9e61-fd21dba3eadc-tigera-ca-bundle podName:98a16a80-0a1a-422c-9e61-fd21dba3eadc nodeName:}" failed. No retries permitted until 2025-11-04 23:56:22.589895614 +0000 UTC m=+21.740634758 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/98a16a80-0a1a-422c-9e61-fd21dba3eadc-tigera-ca-bundle") pod "calico-typha-84b969dc9b-dj4hv" (UID: "98a16a80-0a1a-422c-9e61-fd21dba3eadc") : failed to sync configmap cache: timed out waiting for the condition Nov 4 23:56:22.109216 kubelet[2826]: E1104 23:56:22.109170 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.109216 kubelet[2826]: W1104 23:56:22.109201 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.109432 kubelet[2826]: E1104 23:56:22.109227 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.210845 kubelet[2826]: E1104 23:56:22.210465 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.210845 kubelet[2826]: W1104 23:56:22.210525 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.210845 kubelet[2826]: E1104 23:56:22.210553 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:22.222013 kubelet[2826]: E1104 23:56:22.221969 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.222102 kubelet[2826]: W1104 23:56:22.222000 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.222102 kubelet[2826]: E1104 23:56:22.222065 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.311700 kubelet[2826]: E1104 23:56:22.311652 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.311700 kubelet[2826]: W1104 23:56:22.311681 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.311700 kubelet[2826]: E1104 23:56:22.311706 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:22.320807 containerd[1627]: time="2025-11-04T23:56:22.320709778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6vhbh,Uid:7dd6ce51-97c6-4c6f-8703-937244e86ab5,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:22.351940 containerd[1627]: time="2025-11-04T23:56:22.351882414Z" level=info msg="connecting to shim 9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4" address="unix:///run/containerd/s/5cf78ca0d88f4142ec12f5820760ae4eff15e9022c475dba1932780704f30e49" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:22.394710 systemd[1]: Started cri-containerd-9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4.scope - libcontainer container 9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4. Nov 4 23:56:22.413849 kubelet[2826]: E1104 23:56:22.413799 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.413849 kubelet[2826]: W1104 23:56:22.413836 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.414066 kubelet[2826]: E1104 23:56:22.413869 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:22.431578 containerd[1627]: time="2025-11-04T23:56:22.431514769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6vhbh,Uid:7dd6ce51-97c6-4c6f-8703-937244e86ab5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\"" Nov 4 23:56:22.442884 containerd[1627]: time="2025-11-04T23:56:22.442851385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 4 23:56:22.515117 kubelet[2826]: E1104 23:56:22.515051 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.515117 kubelet[2826]: W1104 23:56:22.515082 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.515117 kubelet[2826]: E1104 23:56:22.515107 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.616462 kubelet[2826]: E1104 23:56:22.616421 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.616462 kubelet[2826]: W1104 23:56:22.616444 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.616462 kubelet[2826]: E1104 23:56:22.616463 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.616701 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.617209 kubelet[2826]: W1104 23:56:22.616710 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.616719 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.616869 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.617209 kubelet[2826]: W1104 23:56:22.616877 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.616885 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.616996 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.617209 kubelet[2826]: W1104 23:56:22.617002 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.617011 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.617209 kubelet[2826]: E1104 23:56:22.617166 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.617748 kubelet[2826]: W1104 23:56:22.617174 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.617748 kubelet[2826]: E1104 23:56:22.617181 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 4 23:56:22.618258 kubelet[2826]: E1104 23:56:22.618033 2826 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 4 23:56:22.618258 kubelet[2826]: W1104 23:56:22.618052 2826 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 4 23:56:22.618258 kubelet[2826]: E1104 23:56:22.618066 2826 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 4 23:56:22.687322 containerd[1627]: time="2025-11-04T23:56:22.687257287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b969dc9b-dj4hv,Uid:98a16a80-0a1a-422c-9e61-fd21dba3eadc,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:22.709281 containerd[1627]: time="2025-11-04T23:56:22.708806801Z" level=info msg="connecting to shim 4a90714d5eac9d1dc2f845282742c610d7d427cc5b0458bf464c740325aa41ff" address="unix:///run/containerd/s/072451384b52a49fe341db7656e8aab284a187c83c5ebbf92519226f8b4670af" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:22.765679 systemd[1]: Started cri-containerd-4a90714d5eac9d1dc2f845282742c610d7d427cc5b0458bf464c740325aa41ff.scope - libcontainer container 4a90714d5eac9d1dc2f845282742c610d7d427cc5b0458bf464c740325aa41ff. 
Nov 4 23:56:22.856292 containerd[1627]: time="2025-11-04T23:56:22.856253400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84b969dc9b-dj4hv,Uid:98a16a80-0a1a-422c-9e61-fd21dba3eadc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a90714d5eac9d1dc2f845282742c610d7d427cc5b0458bf464c740325aa41ff\"" Nov 4 23:56:22.964901 kubelet[2826]: E1104 23:56:22.964872 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:56:24.061786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1575431025.mount: Deactivated successfully. Nov 4 23:56:24.140059 containerd[1627]: time="2025-11-04T23:56:24.139982351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:24.141502 containerd[1627]: time="2025-11-04T23:56:24.141289422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 4 23:56:24.142557 containerd[1627]: time="2025-11-04T23:56:24.142530470Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:24.144963 containerd[1627]: time="2025-11-04T23:56:24.144934632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:24.145645 containerd[1627]: time="2025-11-04T23:56:24.145605119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id 
\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.702572086s" Nov 4 23:56:24.145763 containerd[1627]: time="2025-11-04T23:56:24.145741161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 4 23:56:24.148698 containerd[1627]: time="2025-11-04T23:56:24.148644672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 4 23:56:24.149673 containerd[1627]: time="2025-11-04T23:56:24.149594167Z" level=info msg="CreateContainer within sandbox \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 4 23:56:24.159798 containerd[1627]: time="2025-11-04T23:56:24.159764022Z" level=info msg="Container eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:24.171839 containerd[1627]: time="2025-11-04T23:56:24.171786343Z" level=info msg="CreateContainer within sandbox \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\"" Nov 4 23:56:24.173324 containerd[1627]: time="2025-11-04T23:56:24.172235798Z" level=info msg="StartContainer for \"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\"" Nov 4 23:56:24.173324 containerd[1627]: time="2025-11-04T23:56:24.173278287Z" level=info msg="connecting to shim eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a" address="unix:///run/containerd/s/5cf78ca0d88f4142ec12f5820760ae4eff15e9022c475dba1932780704f30e49" 
protocol=ttrpc version=3 Nov 4 23:56:24.198595 systemd[1]: Started cri-containerd-eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a.scope - libcontainer container eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a. Nov 4 23:56:24.250056 containerd[1627]: time="2025-11-04T23:56:24.249968191Z" level=info msg="StartContainer for \"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\" returns successfully" Nov 4 23:56:24.255342 systemd[1]: cri-containerd-eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a.scope: Deactivated successfully. Nov 4 23:56:24.272787 containerd[1627]: time="2025-11-04T23:56:24.272732410Z" level=info msg="received exit event container_id:\"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\" id:\"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\" pid:3425 exited_at:{seconds:1762300584 nanos:261034953}" Nov 4 23:56:24.303151 containerd[1627]: time="2025-11-04T23:56:24.303071867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\" id:\"eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a\" pid:3425 exited_at:{seconds:1762300584 nanos:261034953}" Nov 4 23:56:24.966126 kubelet[2826]: E1104 23:56:24.965971 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:56:25.021899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaf6c716cfdbf4713a20e90ae77f260f198f5a9d4f9b89abf0caa6cb41f5bd6a-rootfs.mount: Deactivated successfully. 
Nov 4 23:56:26.276274 containerd[1627]: time="2025-11-04T23:56:26.276219255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:26.277676 containerd[1627]: time="2025-11-04T23:56:26.277492086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Nov 4 23:56:26.278786 containerd[1627]: time="2025-11-04T23:56:26.278760656Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:26.280877 containerd[1627]: time="2025-11-04T23:56:26.280840939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:26.281461 containerd[1627]: time="2025-11-04T23:56:26.281432419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.132618022s" Nov 4 23:56:26.281559 containerd[1627]: time="2025-11-04T23:56:26.281545150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 4 23:56:26.283018 containerd[1627]: time="2025-11-04T23:56:26.282986322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 4 23:56:26.298683 containerd[1627]: time="2025-11-04T23:56:26.298627605Z" level=info msg="CreateContainer within sandbox \"4a90714d5eac9d1dc2f845282742c610d7d427cc5b0458bf464c740325aa41ff\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 4 23:56:26.306298 containerd[1627]: time="2025-11-04T23:56:26.304607206Z" level=info msg="Container 996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:26.311162 containerd[1627]: time="2025-11-04T23:56:26.311125338Z" level=info msg="CreateContainer within sandbox \"4a90714d5eac9d1dc2f845282742c610d7d427cc5b0458bf464c740325aa41ff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c\"" Nov 4 23:56:26.311622 containerd[1627]: time="2025-11-04T23:56:26.311597357Z" level=info msg="StartContainer for \"996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c\"" Nov 4 23:56:26.312301 containerd[1627]: time="2025-11-04T23:56:26.312271962Z" level=info msg="connecting to shim 996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c" address="unix:///run/containerd/s/072451384b52a49fe341db7656e8aab284a187c83c5ebbf92519226f8b4670af" protocol=ttrpc version=3 Nov 4 23:56:26.337606 systemd[1]: Started cri-containerd-996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c.scope - libcontainer container 996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c. 
Nov 4 23:56:26.385979 containerd[1627]: time="2025-11-04T23:56:26.385833955Z" level=info msg="StartContainer for \"996a83ada5840323aab9a80e862d628294738e923f247fc46b53d119d7de260c\" returns successfully"
Nov 4 23:56:26.966159 kubelet[2826]: E1104 23:56:26.966097 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8"
Nov 4 23:56:28.082186 kubelet[2826]: I1104 23:56:28.082060 2826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 4 23:56:28.864372 containerd[1627]: time="2025-11-04T23:56:28.864322051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:56:28.865461 containerd[1627]: time="2025-11-04T23:56:28.865301094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 4 23:56:28.866251 containerd[1627]: time="2025-11-04T23:56:28.866225817Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:56:28.867929 containerd[1627]: time="2025-11-04T23:56:28.867898674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 4 23:56:28.868439 containerd[1627]: time="2025-11-04T23:56:28.868415396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.585394829s"
Nov 4 23:56:28.868782 containerd[1627]: time="2025-11-04T23:56:28.868524549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 4 23:56:28.871625 containerd[1627]: time="2025-11-04T23:56:28.871588826Z" level=info msg="CreateContainer within sandbox \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 4 23:56:28.881522 containerd[1627]: time="2025-11-04T23:56:28.880635316Z" level=info msg="Container a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 23:56:28.893649 containerd[1627]: time="2025-11-04T23:56:28.893487835Z" level=info msg="CreateContainer within sandbox \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\""
Nov 4 23:56:28.898125 containerd[1627]: time="2025-11-04T23:56:28.898107309Z" level=info msg="StartContainer for \"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\""
Nov 4 23:56:28.909980 containerd[1627]: time="2025-11-04T23:56:28.909962331Z" level=info msg="connecting to shim a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c" address="unix:///run/containerd/s/5cf78ca0d88f4142ec12f5820760ae4eff15e9022c475dba1932780704f30e49" protocol=ttrpc version=3
Nov 4 23:56:28.934599 systemd[1]: Started cri-containerd-a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c.scope - libcontainer container a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c.
Nov 4 23:56:28.965997 kubelet[2826]: E1104 23:56:28.965970 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8"
Nov 4 23:56:28.988285 containerd[1627]: time="2025-11-04T23:56:28.987606450Z" level=info msg="StartContainer for \"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\" returns successfully"
Nov 4 23:56:29.107116 kubelet[2826]: I1104 23:56:29.107005 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84b969dc9b-dj4hv" podStartSLOduration=5.68393473 podStartE2EDuration="9.106993683s" podCreationTimestamp="2025-11-04 23:56:20 +0000 UTC" firstStartedPulling="2025-11-04 23:56:22.859308397 +0000 UTC m=+22.010047510" lastFinishedPulling="2025-11-04 23:56:26.28236735 +0000 UTC m=+25.433106463" observedRunningTime="2025-11-04 23:56:27.092720824 +0000 UTC m=+26.243459937" watchObservedRunningTime="2025-11-04 23:56:29.106993683 +0000 UTC m=+28.257732807"
Nov 4 23:56:29.419361 systemd[1]: cri-containerd-a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c.scope: Deactivated successfully.
Nov 4 23:56:29.419707 systemd[1]: cri-containerd-a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c.scope: Consumed 399ms CPU time, 159.7M memory peak, 9.6M read from disk, 171.3M written to disk.
Nov 4 23:56:29.496725 containerd[1627]: time="2025-11-04T23:56:29.496690979Z" level=info msg="received exit event container_id:\"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\" id:\"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\" pid:3524 exited_at:{seconds:1762300589 nanos:476139398}"
Nov 4 23:56:29.497552 containerd[1627]: time="2025-11-04T23:56:29.497526787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\" id:\"a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c\" pid:3524 exited_at:{seconds:1762300589 nanos:476139398}"
Nov 4 23:56:29.515027 kubelet[2826]: I1104 23:56:29.514990 2826 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 4 23:56:29.570187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a454bd56eaa9046cad6726384f842615751357449bd2ba43c216b2d7d6cd8b2c-rootfs.mount: Deactivated successfully.
Nov 4 23:56:29.615849 systemd[1]: Created slice kubepods-burstable-podef7f4d20_18d0_40c0_9e77_df43127da829.slice - libcontainer container kubepods-burstable-podef7f4d20_18d0_40c0_9e77_df43127da829.slice.
Nov 4 23:56:29.625429 systemd[1]: Created slice kubepods-besteffort-pod8b968e43_5bbd_430c_9c38_670c7bbfd2f3.slice - libcontainer container kubepods-besteffort-pod8b968e43_5bbd_430c_9c38_670c7bbfd2f3.slice.
Nov 4 23:56:29.632798 systemd[1]: Created slice kubepods-besteffort-pod931996dd_fd1b_4af9_a724_280dd54dbe3b.slice - libcontainer container kubepods-besteffort-pod931996dd_fd1b_4af9_a724_280dd54dbe3b.slice.
Nov 4 23:56:29.640571 systemd[1]: Created slice kubepods-burstable-pod2dd6ec4e_4d6c_4ec9_b118_6c9da0be2ab2.slice - libcontainer container kubepods-burstable-pod2dd6ec4e_4d6c_4ec9_b118_6c9da0be2ab2.slice.
Nov 4 23:56:29.648265 systemd[1]: Created slice kubepods-besteffort-podaf60ecf0_185c_46c3_9aed_e5c51cd74bb3.slice - libcontainer container kubepods-besteffort-podaf60ecf0_185c_46c3_9aed_e5c51cd74bb3.slice.
Nov 4 23:56:29.654207 systemd[1]: Created slice kubepods-besteffort-pod13dd94ef_4602_4fcf_b36d_5a6661064a5d.slice - libcontainer container kubepods-besteffort-pod13dd94ef_4602_4fcf_b36d_5a6661064a5d.slice.
Nov 4 23:56:29.658755 systemd[1]: Created slice kubepods-besteffort-pod2f84ec82_93eb_45f7_9d50_0aaa6ffeaaaa.slice - libcontainer container kubepods-besteffort-pod2f84ec82_93eb_45f7_9d50_0aaa6ffeaaaa.slice.
Nov 4 23:56:29.664545 kubelet[2826]: I1104 23:56:29.664370 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-kube-api-access-dm6b9\") pod \"whisker-7dc658688-krcg4\" (UID: \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\") " pod="calico-system/whisker-7dc658688-krcg4"
Nov 4 23:56:29.664545 kubelet[2826]: I1104 23:56:29.664402 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6nn7\" (UniqueName: \"kubernetes.io/projected/af60ecf0-185c-46c3-9aed-e5c51cd74bb3-kube-api-access-h6nn7\") pod \"calico-kube-controllers-7b747dc6fd-58ml9\" (UID: \"af60ecf0-185c-46c3-9aed-e5c51cd74bb3\") " pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9"
Nov 4 23:56:29.664545 kubelet[2826]: I1104 23:56:29.664436 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13dd94ef-4602-4fcf-b36d-5a6661064a5d-goldmane-ca-bundle\") pod \"goldmane-666569f655-pxcr4\" (UID: \"13dd94ef-4602-4fcf-b36d-5a6661064a5d\") " pod="calico-system/goldmane-666569f655-pxcr4"
Nov 4 23:56:29.664545 kubelet[2826]: I1104 23:56:29.664450 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/931996dd-fd1b-4af9-a724-280dd54dbe3b-calico-apiserver-certs\") pod \"calico-apiserver-677647d65b-8x7v5\" (UID: \"931996dd-fd1b-4af9-a724-280dd54dbe3b\") " pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5"
Nov 4 23:56:29.664545 kubelet[2826]: I1104 23:56:29.664463 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/13dd94ef-4602-4fcf-b36d-5a6661064a5d-goldmane-key-pair\") pod \"goldmane-666569f655-pxcr4\" (UID: \"13dd94ef-4602-4fcf-b36d-5a6661064a5d\") " pod="calico-system/goldmane-666569f655-pxcr4"
Nov 4 23:56:29.664907 kubelet[2826]: I1104 23:56:29.664500 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhjtq\" (UniqueName: \"kubernetes.io/projected/13dd94ef-4602-4fcf-b36d-5a6661064a5d-kube-api-access-bhjtq\") pod \"goldmane-666569f655-pxcr4\" (UID: \"13dd94ef-4602-4fcf-b36d-5a6661064a5d\") " pod="calico-system/goldmane-666569f655-pxcr4"
Nov 4 23:56:29.664907 kubelet[2826]: I1104 23:56:29.664518 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8mq9\" (UniqueName: \"kubernetes.io/projected/2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa-kube-api-access-t8mq9\") pod \"calico-apiserver-677647d65b-vprpk\" (UID: \"2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa\") " pod="calico-apiserver/calico-apiserver-677647d65b-vprpk"
Nov 4 23:56:29.664907 kubelet[2826]: I1104 23:56:29.664538 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zn4\" (UniqueName: \"kubernetes.io/projected/2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2-kube-api-access-z4zn4\") pod \"coredns-674b8bbfcf-cjmfr\" (UID: \"2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2\") " pod="kube-system/coredns-674b8bbfcf-cjmfr"
Nov 4 23:56:29.664907 kubelet[2826]: I1104 23:56:29.664552 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13dd94ef-4602-4fcf-b36d-5a6661064a5d-config\") pod \"goldmane-666569f655-pxcr4\" (UID: \"13dd94ef-4602-4fcf-b36d-5a6661064a5d\") " pod="calico-system/goldmane-666569f655-pxcr4"
Nov 4 23:56:29.664907 kubelet[2826]: I1104 23:56:29.664588 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-backend-key-pair\") pod \"whisker-7dc658688-krcg4\" (UID: \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\") " pod="calico-system/whisker-7dc658688-krcg4"
Nov 4 23:56:29.665004 kubelet[2826]: I1104 23:56:29.664621 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-ca-bundle\") pod \"whisker-7dc658688-krcg4\" (UID: \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\") " pod="calico-system/whisker-7dc658688-krcg4"
Nov 4 23:56:29.665004 kubelet[2826]: I1104 23:56:29.664635 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef7f4d20-18d0-40c0-9e77-df43127da829-config-volume\") pod \"coredns-674b8bbfcf-rxbfs\" (UID: \"ef7f4d20-18d0-40c0-9e77-df43127da829\") " pod="kube-system/coredns-674b8bbfcf-rxbfs"
Nov 4 23:56:29.665004 kubelet[2826]: I1104 23:56:29.664662 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllh7\" (UniqueName: \"kubernetes.io/projected/ef7f4d20-18d0-40c0-9e77-df43127da829-kube-api-access-wllh7\") pod \"coredns-674b8bbfcf-rxbfs\" (UID: \"ef7f4d20-18d0-40c0-9e77-df43127da829\") " pod="kube-system/coredns-674b8bbfcf-rxbfs"
Nov 4 23:56:29.665004 kubelet[2826]: I1104 23:56:29.664675 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w8gg\" (UniqueName: \"kubernetes.io/projected/931996dd-fd1b-4af9-a724-280dd54dbe3b-kube-api-access-6w8gg\") pod \"calico-apiserver-677647d65b-8x7v5\" (UID: \"931996dd-fd1b-4af9-a724-280dd54dbe3b\") " pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5"
Nov 4 23:56:29.665004 kubelet[2826]: I1104 23:56:29.664691 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa-calico-apiserver-certs\") pod \"calico-apiserver-677647d65b-vprpk\" (UID: \"2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa\") " pod="calico-apiserver/calico-apiserver-677647d65b-vprpk"
Nov 4 23:56:29.665091 kubelet[2826]: I1104 23:56:29.664704 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2-config-volume\") pod \"coredns-674b8bbfcf-cjmfr\" (UID: \"2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2\") " pod="kube-system/coredns-674b8bbfcf-cjmfr"
Nov 4 23:56:29.665091 kubelet[2826]: I1104 23:56:29.664715 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af60ecf0-185c-46c3-9aed-e5c51cd74bb3-tigera-ca-bundle\") pod \"calico-kube-controllers-7b747dc6fd-58ml9\" (UID: \"af60ecf0-185c-46c3-9aed-e5c51cd74bb3\") " pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9"
Nov 4 23:56:29.942771 containerd[1627]: time="2025-11-04T23:56:29.942730852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-8x7v5,Uid:931996dd-fd1b-4af9-a724-280dd54dbe3b,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 23:56:29.943106 containerd[1627]: time="2025-11-04T23:56:29.943087396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc658688-krcg4,Uid:8b968e43-5bbd-430c-9c38-670c7bbfd2f3,Namespace:calico-system,Attempt:0,}"
Nov 4 23:56:29.943291 containerd[1627]: time="2025-11-04T23:56:29.943252634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbfs,Uid:ef7f4d20-18d0-40c0-9e77-df43127da829,Namespace:kube-system,Attempt:0,}"
Nov 4 23:56:29.947858 containerd[1627]: time="2025-11-04T23:56:29.946943811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cjmfr,Uid:2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2,Namespace:kube-system,Attempt:0,}"
Nov 4 23:56:29.954858 containerd[1627]: time="2025-11-04T23:56:29.953912104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b747dc6fd-58ml9,Uid:af60ecf0-185c-46c3-9aed-e5c51cd74bb3,Namespace:calico-system,Attempt:0,}"
Nov 4 23:56:29.957694 containerd[1627]: time="2025-11-04T23:56:29.957661891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pxcr4,Uid:13dd94ef-4602-4fcf-b36d-5a6661064a5d,Namespace:calico-system,Attempt:0,}"
Nov 4 23:56:29.963138 containerd[1627]: time="2025-11-04T23:56:29.963117867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-vprpk,Uid:2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa,Namespace:calico-apiserver,Attempt:0,}"
Nov 4 23:56:30.104957 containerd[1627]: time="2025-11-04T23:56:30.104927470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 4 23:56:30.211404 containerd[1627]: time="2025-11-04T23:56:30.211020469Z" level=error msg="Failed to destroy network for sandbox \"8feff1dbdd039347deb9ef3111f4dbdbfb2084066035d32befa6be0e4940f2c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.215029 containerd[1627]: time="2025-11-04T23:56:30.215005324Z" level=error msg="Failed to destroy network for sandbox \"f6a71fcb005f92af05336df50681d8b83ad44bf446e0f59a435dd389bd9c72ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.215276 containerd[1627]: time="2025-11-04T23:56:30.215251182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-8x7v5,Uid:931996dd-fd1b-4af9-a724-280dd54dbe3b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8feff1dbdd039347deb9ef3111f4dbdbfb2084066035d32befa6be0e4940f2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.218245 containerd[1627]: time="2025-11-04T23:56:30.218181995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbfs,Uid:ef7f4d20-18d0-40c0-9e77-df43127da829,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a71fcb005f92af05336df50681d8b83ad44bf446e0f59a435dd389bd9c72ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.226044 kubelet[2826]: E1104 23:56:30.225996 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8feff1dbdd039347deb9ef3111f4dbdbfb2084066035d32befa6be0e4940f2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.226284 kubelet[2826]: E1104 23:56:30.226071 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8feff1dbdd039347deb9ef3111f4dbdbfb2084066035d32befa6be0e4940f2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5"
Nov 4 23:56:30.226284 kubelet[2826]: E1104 23:56:30.226092 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8feff1dbdd039347deb9ef3111f4dbdbfb2084066035d32befa6be0e4940f2c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5"
Nov 4 23:56:30.226284 kubelet[2826]: E1104 23:56:30.226135 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677647d65b-8x7v5_calico-apiserver(931996dd-fd1b-4af9-a724-280dd54dbe3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677647d65b-8x7v5_calico-apiserver(931996dd-fd1b-4af9-a724-280dd54dbe3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8feff1dbdd039347deb9ef3111f4dbdbfb2084066035d32befa6be0e4940f2c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b"
Nov 4 23:56:30.227097 kubelet[2826]: E1104 23:56:30.226904 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a71fcb005f92af05336df50681d8b83ad44bf446e0f59a435dd389bd9c72ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.227097 kubelet[2826]: E1104 23:56:30.226952 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a71fcb005f92af05336df50681d8b83ad44bf446e0f59a435dd389bd9c72ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbfs"
Nov 4 23:56:30.227097 kubelet[2826]: E1104 23:56:30.226970 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6a71fcb005f92af05336df50681d8b83ad44bf446e0f59a435dd389bd9c72ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxbfs"
Nov 4 23:56:30.227188 kubelet[2826]: E1104 23:56:30.227007 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rxbfs_kube-system(ef7f4d20-18d0-40c0-9e77-df43127da829)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rxbfs_kube-system(ef7f4d20-18d0-40c0-9e77-df43127da829)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6a71fcb005f92af05336df50681d8b83ad44bf446e0f59a435dd389bd9c72ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rxbfs" podUID="ef7f4d20-18d0-40c0-9e77-df43127da829"
Nov 4 23:56:30.227941 containerd[1627]: time="2025-11-04T23:56:30.227910988Z" level=error msg="Failed to destroy network for sandbox \"d0cde83177023a3ea76283491a5bc8c6c3b92c928d506b284a810d5bfdb77348\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.228402 containerd[1627]: time="2025-11-04T23:56:30.228365014Z" level=error msg="Failed to destroy network for sandbox \"b6c4710c3304cf18b96cbcbaa76b76e70cfcf825d15d5397a2a454a69e8fb737\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.233413 containerd[1627]: time="2025-11-04T23:56:30.232807983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cjmfr,Uid:2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cde83177023a3ea76283491a5bc8c6c3b92c928d506b284a810d5bfdb77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.233504 kubelet[2826]: E1104 23:56:30.233015 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cde83177023a3ea76283491a5bc8c6c3b92c928d506b284a810d5bfdb77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.233504 kubelet[2826]: E1104 23:56:30.233054 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cde83177023a3ea76283491a5bc8c6c3b92c928d506b284a810d5bfdb77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cjmfr"
Nov 4 23:56:30.233504 kubelet[2826]: E1104 23:56:30.233079 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0cde83177023a3ea76283491a5bc8c6c3b92c928d506b284a810d5bfdb77348\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cjmfr"
Nov 4 23:56:30.233573 kubelet[2826]: E1104 23:56:30.233140 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cjmfr_kube-system(2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cjmfr_kube-system(2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0cde83177023a3ea76283491a5bc8c6c3b92c928d506b284a810d5bfdb77348\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cjmfr" podUID="2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2"
Nov 4 23:56:30.234883 containerd[1627]: time="2025-11-04T23:56:30.234854889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b747dc6fd-58ml9,Uid:af60ecf0-185c-46c3-9aed-e5c51cd74bb3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c4710c3304cf18b96cbcbaa76b76e70cfcf825d15d5397a2a454a69e8fb737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.235164 kubelet[2826]: E1104 23:56:30.235139 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c4710c3304cf18b96cbcbaa76b76e70cfcf825d15d5397a2a454a69e8fb737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.235202 kubelet[2826]: E1104 23:56:30.235173 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c4710c3304cf18b96cbcbaa76b76e70cfcf825d15d5397a2a454a69e8fb737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9"
Nov 4 23:56:30.235202 kubelet[2826]: E1104 23:56:30.235189 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c4710c3304cf18b96cbcbaa76b76e70cfcf825d15d5397a2a454a69e8fb737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9"
Nov 4 23:56:30.235265 kubelet[2826]: E1104 23:56:30.235242 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b747dc6fd-58ml9_calico-system(af60ecf0-185c-46c3-9aed-e5c51cd74bb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b747dc6fd-58ml9_calico-system(af60ecf0-185c-46c3-9aed-e5c51cd74bb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6c4710c3304cf18b96cbcbaa76b76e70cfcf825d15d5397a2a454a69e8fb737\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3"
Nov 4 23:56:30.238027 containerd[1627]: time="2025-11-04T23:56:30.238003346Z" level=error msg="Failed to destroy network for sandbox \"933cdab5c06b44a2289cd2eb0b4534c98b5524dc9e0a1137dc768c79c8f1875e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.240424 containerd[1627]: time="2025-11-04T23:56:30.240360058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pxcr4,Uid:13dd94ef-4602-4fcf-b36d-5a6661064a5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"933cdab5c06b44a2289cd2eb0b4534c98b5524dc9e0a1137dc768c79c8f1875e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.240779 kubelet[2826]: E1104 23:56:30.240757 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"933cdab5c06b44a2289cd2eb0b4534c98b5524dc9e0a1137dc768c79c8f1875e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.241205 kubelet[2826]: E1104 23:56:30.240789 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"933cdab5c06b44a2289cd2eb0b4534c98b5524dc9e0a1137dc768c79c8f1875e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pxcr4"
Nov 4 23:56:30.241205 kubelet[2826]: E1104 23:56:30.240804 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"933cdab5c06b44a2289cd2eb0b4534c98b5524dc9e0a1137dc768c79c8f1875e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pxcr4"
Nov 4 23:56:30.241205 kubelet[2826]: E1104 23:56:30.240834 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pxcr4_calico-system(13dd94ef-4602-4fcf-b36d-5a6661064a5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pxcr4_calico-system(13dd94ef-4602-4fcf-b36d-5a6661064a5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"933cdab5c06b44a2289cd2eb0b4534c98b5524dc9e0a1137dc768c79c8f1875e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d"
Nov 4 23:56:30.244287 containerd[1627]: time="2025-11-04T23:56:30.244260018Z" level=error msg="Failed to destroy network for sandbox \"6f6262b29fabebd47e2c1c38baa3c92ff92987d66e940251ca1380c16448608f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.245376 containerd[1627]: time="2025-11-04T23:56:30.245357653Z" level=error msg="Failed to destroy network for sandbox \"5f1b91b728019d6a3625875b4403b045d4d133b69f5e661fd8c0c0c33aa38912\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.245529 containerd[1627]: time="2025-11-04T23:56:30.245415400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dc658688-krcg4,Uid:8b968e43-5bbd-430c-9c38-670c7bbfd2f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f6262b29fabebd47e2c1c38baa3c92ff92987d66e940251ca1380c16448608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.245651 kubelet[2826]: E1104 23:56:30.245611 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f6262b29fabebd47e2c1c38baa3c92ff92987d66e940251ca1380c16448608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.245651 kubelet[2826]: E1104 23:56:30.245639 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f6262b29fabebd47e2c1c38baa3c92ff92987d66e940251ca1380c16448608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dc658688-krcg4"
Nov 4 23:56:30.245698 kubelet[2826]: E1104 23:56:30.245651 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f6262b29fabebd47e2c1c38baa3c92ff92987d66e940251ca1380c16448608f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7dc658688-krcg4"
Nov 4 23:56:30.245698 kubelet[2826]: E1104 23:56:30.245681 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7dc658688-krcg4_calico-system(8b968e43-5bbd-430c-9c38-670c7bbfd2f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7dc658688-krcg4_calico-system(8b968e43-5bbd-430c-9c38-670c7bbfd2f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f6262b29fabebd47e2c1c38baa3c92ff92987d66e940251ca1380c16448608f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7dc658688-krcg4" podUID="8b968e43-5bbd-430c-9c38-670c7bbfd2f3"
Nov 4 23:56:30.246859 containerd[1627]: time="2025-11-04T23:56:30.246813405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-vprpk,Uid:2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b91b728019d6a3625875b4403b045d4d133b69f5e661fd8c0c0c33aa38912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.247050 kubelet[2826]: E1104 23:56:30.247026 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b91b728019d6a3625875b4403b045d4d133b69f5e661fd8c0c0c33aa38912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 4 23:56:30.247086 kubelet[2826]: E1104 23:56:30.247078 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b91b728019d6a3625875b4403b045d4d133b69f5e661fd8c0c0c33aa38912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk"
Nov 4 23:56:30.247116 kubelet[2826]: E1104 23:56:30.247092 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f1b91b728019d6a3625875b4403b045d4d133b69f5e661fd8c0c0c33aa38912\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk"
Nov 4 23:56:30.247414 kubelet[2826]: E1104 23:56:30.247164 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677647d65b-vprpk_calico-apiserver(2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa)\" with CreatePodSandboxError: \"Failed to create sandbox
for pod \\\"calico-apiserver-677647d65b-vprpk_calico-apiserver(2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f1b91b728019d6a3625875b4403b045d4d133b69f5e661fd8c0c0c33aa38912\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:56:30.882832 systemd[1]: run-netns-cni\x2d328f8971\x2dd9f8\x2d6bd5\x2d33a4\x2dcac709ad4fdf.mount: Deactivated successfully. Nov 4 23:56:30.883037 systemd[1]: run-netns-cni\x2d0e5826fc\x2dd52e\x2d69c4\x2df363\x2d5b880002d655.mount: Deactivated successfully. Nov 4 23:56:30.883191 systemd[1]: run-netns-cni\x2d7b6177b6\x2d004f\x2de4f3\x2d9a0e\x2d01724b20228f.mount: Deactivated successfully. Nov 4 23:56:30.883341 systemd[1]: run-netns-cni\x2d1efd1b33\x2dd885\x2dad4a\x2d0b4e\x2daacfddf84ec1.mount: Deactivated successfully. Nov 4 23:56:30.883520 systemd[1]: run-netns-cni\x2d1539b878\x2debc9\x2d22df\x2de0d5\x2d7d6a0f57f2af.mount: Deactivated successfully. Nov 4 23:56:30.977350 systemd[1]: Created slice kubepods-besteffort-pod380d0997_b155_4f76_994b_7e2911c8cbf8.slice - libcontainer container kubepods-besteffort-pod380d0997_b155_4f76_994b_7e2911c8cbf8.slice. 
Nov 4 23:56:30.981641 containerd[1627]: time="2025-11-04T23:56:30.981528831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ms7h,Uid:380d0997-b155-4f76-994b-7e2911c8cbf8,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:31.081059 containerd[1627]: time="2025-11-04T23:56:31.080961567Z" level=error msg="Failed to destroy network for sandbox \"1d088c8ab0b8f20a5594426be001e36fee1d081bb9f6451fa0278e357c85e1ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:31.085457 containerd[1627]: time="2025-11-04T23:56:31.085355229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ms7h,Uid:380d0997-b155-4f76-994b-7e2911c8cbf8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d088c8ab0b8f20a5594426be001e36fee1d081bb9f6451fa0278e357c85e1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:31.085924 systemd[1]: run-netns-cni\x2d85026136\x2dbd7d\x2dd65d\x2d0c00\x2dfd5f4cb5c96c.mount: Deactivated successfully. 
Nov 4 23:56:31.087892 kubelet[2826]: E1104 23:56:31.087134 2826 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d088c8ab0b8f20a5594426be001e36fee1d081bb9f6451fa0278e357c85e1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 4 23:56:31.087892 kubelet[2826]: E1104 23:56:31.087210 2826 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d088c8ab0b8f20a5594426be001e36fee1d081bb9f6451fa0278e357c85e1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:31.087892 kubelet[2826]: E1104 23:56:31.087253 2826 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d088c8ab0b8f20a5594426be001e36fee1d081bb9f6451fa0278e357c85e1ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4ms7h" Nov 4 23:56:31.088855 kubelet[2826]: E1104 23:56:31.088464 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d088c8ab0b8f20a5594426be001e36fee1d081bb9f6451fa0278e357c85e1ee\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:56:34.263178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444451522.mount: Deactivated successfully. Nov 4 23:56:34.309880 containerd[1627]: time="2025-11-04T23:56:34.300364709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:34.320634 containerd[1627]: time="2025-11-04T23:56:34.319988228Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:34.320812 containerd[1627]: time="2025-11-04T23:56:34.320785896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 4 23:56:34.322389 containerd[1627]: time="2025-11-04T23:56:34.322368629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 23:56:34.324751 containerd[1627]: time="2025-11-04T23:56:34.324718162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.210706057s" Nov 4 23:56:34.324804 containerd[1627]: time="2025-11-04T23:56:34.324780198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 4 
23:56:34.343280 containerd[1627]: time="2025-11-04T23:56:34.343235037Z" level=info msg="CreateContainer within sandbox \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 4 23:56:34.372911 containerd[1627]: time="2025-11-04T23:56:34.372807907Z" level=info msg="Container 8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:34.375515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015100849.mount: Deactivated successfully. Nov 4 23:56:34.418685 containerd[1627]: time="2025-11-04T23:56:34.418646823Z" level=info msg="CreateContainer within sandbox \"9d644951b87a9bb821c0c715d808612a15525df1f1e09c52de497a67621a13b4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\"" Nov 4 23:56:34.420260 containerd[1627]: time="2025-11-04T23:56:34.419314358Z" level=info msg="StartContainer for \"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\"" Nov 4 23:56:34.428634 containerd[1627]: time="2025-11-04T23:56:34.428594812Z" level=info msg="connecting to shim 8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949" address="unix:///run/containerd/s/5cf78ca0d88f4142ec12f5820760ae4eff15e9022c475dba1932780704f30e49" protocol=ttrpc version=3 Nov 4 23:56:34.519680 systemd[1]: Started cri-containerd-8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949.scope - libcontainer container 8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949. Nov 4 23:56:34.607429 containerd[1627]: time="2025-11-04T23:56:34.607390769Z" level=info msg="StartContainer for \"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" returns successfully" Nov 4 23:56:34.691883 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Nov 4 23:56:34.696575 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 4 23:56:35.006724 kubelet[2826]: I1104 23:56:35.005805 2826 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-ca-bundle\") pod \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\" (UID: \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\") " Nov 4 23:56:35.006724 kubelet[2826]: I1104 23:56:35.005877 2826 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-backend-key-pair\") pod \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\" (UID: \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\") " Nov 4 23:56:35.006724 kubelet[2826]: I1104 23:56:35.005908 2826 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-kube-api-access-dm6b9\") pod \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\" (UID: \"8b968e43-5bbd-430c-9c38-670c7bbfd2f3\") " Nov 4 23:56:35.007870 kubelet[2826]: I1104 23:56:35.007844 2826 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8b968e43-5bbd-430c-9c38-670c7bbfd2f3" (UID: "8b968e43-5bbd-430c-9c38-670c7bbfd2f3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 23:56:35.012706 kubelet[2826]: I1104 23:56:35.012647 2826 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-kube-api-access-dm6b9" (OuterVolumeSpecName: "kube-api-access-dm6b9") pod "8b968e43-5bbd-430c-9c38-670c7bbfd2f3" (UID: "8b968e43-5bbd-430c-9c38-670c7bbfd2f3").
InnerVolumeSpecName "kube-api-access-dm6b9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 23:56:35.016894 kubelet[2826]: I1104 23:56:35.016868 2826 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b968e43-5bbd-430c-9c38-670c7bbfd2f3" (UID: "8b968e43-5bbd-430c-9c38-670c7bbfd2f3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 23:56:35.106305 kubelet[2826]: I1104 23:56:35.106238 2826 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dm6b9\" (UniqueName: \"kubernetes.io/projected/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-kube-api-access-dm6b9\") on node \"ci-4487-0-0-n-1c2c5ddea4\" DevicePath \"\"" Nov 4 23:56:35.106305 kubelet[2826]: I1104 23:56:35.106268 2826 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-ca-bundle\") on node \"ci-4487-0-0-n-1c2c5ddea4\" DevicePath \"\"" Nov 4 23:56:35.106305 kubelet[2826]: I1104 23:56:35.106280 2826 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b968e43-5bbd-430c-9c38-670c7bbfd2f3-whisker-backend-key-pair\") on node \"ci-4487-0-0-n-1c2c5ddea4\" DevicePath \"\"" Nov 4 23:56:35.116965 systemd[1]: Removed slice kubepods-besteffort-pod8b968e43_5bbd_430c_9c38_670c7bbfd2f3.slice - libcontainer container kubepods-besteffort-pod8b968e43_5bbd_430c_9c38_670c7bbfd2f3.slice. 
Nov 4 23:56:35.136887 kubelet[2826]: I1104 23:56:35.133377 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6vhbh" podStartSLOduration=2.249742697 podStartE2EDuration="14.133362288s" podCreationTimestamp="2025-11-04 23:56:21 +0000 UTC" firstStartedPulling="2025-11-04 23:56:22.441923431 +0000 UTC m=+21.592662575" lastFinishedPulling="2025-11-04 23:56:34.325543052 +0000 UTC m=+33.476282166" observedRunningTime="2025-11-04 23:56:35.132695615 +0000 UTC m=+34.283434718" watchObservedRunningTime="2025-11-04 23:56:35.133362288 +0000 UTC m=+34.284101402" Nov 4 23:56:35.230794 systemd[1]: Created slice kubepods-besteffort-pod8bf957df_2cff_497d_912f_bd9de450b664.slice - libcontainer container kubepods-besteffort-pod8bf957df_2cff_497d_912f_bd9de450b664.slice. Nov 4 23:56:35.264400 systemd[1]: var-lib-kubelet-pods-8b968e43\x2d5bbd\x2d430c\x2d9c38\x2d670c7bbfd2f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddm6b9.mount: Deactivated successfully. Nov 4 23:56:35.264790 systemd[1]: var-lib-kubelet-pods-8b968e43\x2d5bbd\x2d430c\x2d9c38\x2d670c7bbfd2f3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 4 23:56:35.290602 containerd[1627]: time="2025-11-04T23:56:35.290567558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" id:\"f1ce92ac5660355ad6ca1cdf35cce8e3531f293d22308a3fcf3eceaf72f68a8f\" pid:3874 exit_status:1 exited_at:{seconds:1762300595 nanos:289504835}" Nov 4 23:56:35.309013 kubelet[2826]: I1104 23:56:35.308968 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf957df-2cff-497d-912f-bd9de450b664-whisker-ca-bundle\") pod \"whisker-79bffcdf98-xwjrq\" (UID: \"8bf957df-2cff-497d-912f-bd9de450b664\") " pod="calico-system/whisker-79bffcdf98-xwjrq" Nov 4 23:56:35.309274 kubelet[2826]: I1104 23:56:35.309216 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9mg\" (UniqueName: \"kubernetes.io/projected/8bf957df-2cff-497d-912f-bd9de450b664-kube-api-access-bv9mg\") pod \"whisker-79bffcdf98-xwjrq\" (UID: \"8bf957df-2cff-497d-912f-bd9de450b664\") " pod="calico-system/whisker-79bffcdf98-xwjrq" Nov 4 23:56:35.309397 kubelet[2826]: I1104 23:56:35.309362 2826 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bf957df-2cff-497d-912f-bd9de450b664-whisker-backend-key-pair\") pod \"whisker-79bffcdf98-xwjrq\" (UID: \"8bf957df-2cff-497d-912f-bd9de450b664\") " pod="calico-system/whisker-79bffcdf98-xwjrq" Nov 4 23:56:35.538680 containerd[1627]: time="2025-11-04T23:56:35.538176318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bffcdf98-xwjrq,Uid:8bf957df-2cff-497d-912f-bd9de450b664,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:35.946790 systemd-networkd[1526]: cali9ad9dab8ea3: Link UP Nov 4 23:56:35.951136 systemd-networkd[1526]: cali9ad9dab8ea3: Gained carrier Nov 4 
23:56:35.979821 containerd[1627]: 2025-11-04 23:56:35.585 [INFO][3886] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:35.979821 containerd[1627]: 2025-11-04 23:56:35.628 [INFO][3886] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0 whisker-79bffcdf98- calico-system 8bf957df-2cff-497d-912f-bd9de450b664 911 0 2025-11-04 23:56:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79bffcdf98 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 whisker-79bffcdf98-xwjrq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9ad9dab8ea3 [] [] }} ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-" Nov 4 23:56:35.979821 containerd[1627]: 2025-11-04 23:56:35.628 [INFO][3886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:35.979821 containerd[1627]: 2025-11-04 23:56:35.848 [INFO][3898] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" HandleID="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.852 [INFO][3898] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" HandleID="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ec10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"whisker-79bffcdf98-xwjrq", "timestamp":"2025-11-04 23:56:35.848714361 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.852 [INFO][3898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.853 [INFO][3898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.854 [INFO][3898] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.874 [INFO][3898] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.887 [INFO][3898] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.897 [INFO][3898] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.899 [INFO][3898] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.980808 containerd[1627]: 2025-11-04 23:56:35.902 [INFO][3898] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.902 [INFO][3898] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.905 [INFO][3898] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.911 [INFO][3898] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.919 [INFO][3898] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.97.1/26] block=192.168.97.0/26 handle="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.919 [INFO][3898] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.1/26] handle="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.919 [INFO][3898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:35.982305 containerd[1627]: 2025-11-04 23:56:35.919 [INFO][3898] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.1/26] IPv6=[] ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" HandleID="k8s-pod-network.6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:35.983060 containerd[1627]: 2025-11-04 23:56:35.924 [INFO][3886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0", GenerateName:"whisker-79bffcdf98-", Namespace:"calico-system", SelfLink:"", UID:"8bf957df-2cff-497d-912f-bd9de450b664", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79bffcdf98", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"whisker-79bffcdf98-xwjrq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9ad9dab8ea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:35.983060 containerd[1627]: 2025-11-04 23:56:35.924 [INFO][3886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.1/32] ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:35.984197 containerd[1627]: 2025-11-04 23:56:35.924 [INFO][3886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ad9dab8ea3 ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:35.984197 containerd[1627]: 2025-11-04 23:56:35.940 [INFO][3886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:35.984278 containerd[1627]: 2025-11-04 23:56:35.941 [INFO][3886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0", GenerateName:"whisker-79bffcdf98-", Namespace:"calico-system", SelfLink:"", UID:"8bf957df-2cff-497d-912f-bd9de450b664", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79bffcdf98", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa", Pod:"whisker-79bffcdf98-xwjrq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9ad9dab8ea3", MAC:"86:84:40:c4:82:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:35.984381 containerd[1627]: 2025-11-04 23:56:35.973 [INFO][3886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" Namespace="calico-system" 
Pod="whisker-79bffcdf98-xwjrq" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-whisker--79bffcdf98--xwjrq-eth0" Nov 4 23:56:36.189857 containerd[1627]: time="2025-11-04T23:56:36.189428821Z" level=info msg="connecting to shim 6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa" address="unix:///run/containerd/s/b9cbe8cf3959190d0ecf532430e48a57318958f88247c24719f07be127a52a3f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:36.232809 systemd[1]: Started cri-containerd-6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa.scope - libcontainer container 6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa. Nov 4 23:56:36.337621 containerd[1627]: time="2025-11-04T23:56:36.337586353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bffcdf98-xwjrq,Uid:8bf957df-2cff-497d-912f-bd9de450b664,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a9241cde68a3626c6f894a4feccebb7cf4b40e1bae759559c74bc1d8c2d20aa\"" Nov 4 23:56:36.342029 containerd[1627]: time="2025-11-04T23:56:36.342004148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:56:36.486592 containerd[1627]: time="2025-11-04T23:56:36.486345646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" id:\"7c584e092abf7e7ca8bde62dd02f51309183726e736a3cfa3f9b74135dda0b0c\" pid:3992 exit_status:1 exited_at:{seconds:1762300596 nanos:486105499}" Nov 4 23:56:36.821597 containerd[1627]: time="2025-11-04T23:56:36.821535333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:36.823074 containerd[1627]: time="2025-11-04T23:56:36.823018862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:56:36.823626 containerd[1627]: time="2025-11-04T23:56:36.823284186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:56:36.824143 kubelet[2826]: E1104 23:56:36.823512 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:36.824143 kubelet[2826]: E1104 23:56:36.823572 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:36.838242 kubelet[2826]: E1104 23:56:36.838186 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a1a393e6146d446cbeef71df668a987a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:36.840556 containerd[1627]: time="2025-11-04T23:56:36.840353543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:56:36.966106 kubelet[2826]: I1104 23:56:36.966058 2826 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b968e43-5bbd-430c-9c38-670c7bbfd2f3" path="/var/lib/kubelet/pods/8b968e43-5bbd-430c-9c38-670c7bbfd2f3/volumes" Nov 4 23:56:37.256537 containerd[1627]: time="2025-11-04T23:56:37.256356111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:37.259233 containerd[1627]: time="2025-11-04T23:56:37.259135279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:56:37.259439 containerd[1627]: time="2025-11-04T23:56:37.259258669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:37.259620 kubelet[2826]: E1104 23:56:37.259557 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:37.259763 kubelet[2826]: E1104 23:56:37.259631 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:37.259923 kubelet[2826]: E1104 23:56:37.259805 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:37.261371 kubelet[2826]: E1104 23:56:37.261279 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:56:37.888889 systemd-networkd[1526]: cali9ad9dab8ea3: Gained IPv6LL Nov 4 23:56:38.153358 kubelet[2826]: E1104 23:56:38.153193 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:56:41.964455 containerd[1627]: time="2025-11-04T23:56:41.964355144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbfs,Uid:ef7f4d20-18d0-40c0-9e77-df43127da829,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:41.964771 containerd[1627]: time="2025-11-04T23:56:41.964560367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b747dc6fd-58ml9,Uid:af60ecf0-185c-46c3-9aed-e5c51cd74bb3,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:42.100261 systemd-networkd[1526]: calie091a8f32da: Link UP Nov 4 23:56:42.100500 systemd-networkd[1526]: calie091a8f32da: Gained carrier Nov 4 23:56:42.122017 containerd[1627]: 2025-11-04 23:56:42.012 [INFO][4180] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:42.122017 containerd[1627]: 2025-11-04 23:56:42.025 [INFO][4180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0 coredns-674b8bbfcf- kube-system ef7f4d20-18d0-40c0-9e77-df43127da829 843 0 2025-11-04 23:56:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 coredns-674b8bbfcf-rxbfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie091a8f32da [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-" Nov 4 23:56:42.122017 containerd[1627]: 2025-11-04 
23:56:42.025 [INFO][4180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.122017 containerd[1627]: 2025-11-04 23:56:42.061 [INFO][4207] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" HandleID="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.062 [INFO][4207] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" HandleID="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f900), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"coredns-674b8bbfcf-rxbfs", "timestamp":"2025-11-04 23:56:42.061912172 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.062 [INFO][4207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.062 [INFO][4207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.062 [INFO][4207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.068 [INFO][4207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.072 [INFO][4207] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.076 [INFO][4207] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.077 [INFO][4207] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122200 containerd[1627]: 2025-11-04 23:56:42.079 [INFO][4207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.079 [INFO][4207] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.080 [INFO][4207] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.085 [INFO][4207] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.089 [INFO][4207] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.97.2/26] block=192.168.97.0/26 handle="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.089 [INFO][4207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.2/26] handle="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.089 [INFO][4207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:42.122787 containerd[1627]: 2025-11-04 23:56:42.089 [INFO][4207] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.2/26] IPv6=[] ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" HandleID="k8s-pod-network.73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.122906 containerd[1627]: 2025-11-04 23:56:42.091 [INFO][4180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef7f4d20-18d0-40c0-9e77-df43127da829", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"coredns-674b8bbfcf-rxbfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie091a8f32da", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:42.122906 containerd[1627]: 2025-11-04 23:56:42.091 [INFO][4180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.2/32] ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.122906 containerd[1627]: 2025-11-04 23:56:42.091 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie091a8f32da ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.122906 containerd[1627]: 2025-11-04 23:56:42.101 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.122906 containerd[1627]: 2025-11-04 23:56:42.101 [INFO][4180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef7f4d20-18d0-40c0-9e77-df43127da829", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee", Pod:"coredns-674b8bbfcf-rxbfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie091a8f32da", MAC:"62:86:d0:b0:ba:c0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:42.122906 containerd[1627]: 2025-11-04 23:56:42.117 [INFO][4180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxbfs" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--rxbfs-eth0" Nov 4 23:56:42.140523 containerd[1627]: time="2025-11-04T23:56:42.140419622Z" level=info msg="connecting to shim 73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee" address="unix:///run/containerd/s/239e7f6005bae4c4bf92e0613e1a90f003267b437ccd66584173d9182c122d23" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:42.163628 systemd[1]: Started cri-containerd-73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee.scope - libcontainer container 73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee. 
Nov 4 23:56:42.215526 systemd-networkd[1526]: cali60eb23f9707: Link UP Nov 4 23:56:42.216861 systemd-networkd[1526]: cali60eb23f9707: Gained carrier Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.005 [INFO][4190] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.020 [INFO][4190] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0 calico-kube-controllers-7b747dc6fd- calico-system af60ecf0-185c-46c3-9aed-e5c51cd74bb3 846 0 2025-11-04 23:56:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b747dc6fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 calico-kube-controllers-7b747dc6fd-58ml9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali60eb23f9707 [] [] }} ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.020 [INFO][4190] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.065 [INFO][4205] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" HandleID="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.066 [INFO][4205] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" HandleID="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"calico-kube-controllers-7b747dc6fd-58ml9", "timestamp":"2025-11-04 23:56:42.065838657 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.066 [INFO][4205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.089 [INFO][4205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.089 [INFO][4205] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.170 [INFO][4205] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.177 [INFO][4205] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.181 [INFO][4205] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.184 [INFO][4205] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.187 [INFO][4205] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.187 [INFO][4205] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.189 [INFO][4205] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.194 [INFO][4205] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.203 [INFO][4205] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.97.3/26] block=192.168.97.0/26 handle="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.203 [INFO][4205] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.3/26] handle="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.203 [INFO][4205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:42.230003 containerd[1627]: 2025-11-04 23:56:42.203 [INFO][4205] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.3/26] IPv6=[] ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" HandleID="k8s-pod-network.c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.231835 containerd[1627]: 2025-11-04 23:56:42.209 [INFO][4190] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0", GenerateName:"calico-kube-controllers-7b747dc6fd-", Namespace:"calico-system", SelfLink:"", UID:"af60ecf0-185c-46c3-9aed-e5c51cd74bb3", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"7b747dc6fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"calico-kube-controllers-7b747dc6fd-58ml9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60eb23f9707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:42.231835 containerd[1627]: 2025-11-04 23:56:42.209 [INFO][4190] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.3/32] ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.231835 containerd[1627]: 2025-11-04 23:56:42.209 [INFO][4190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60eb23f9707 ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.231835 containerd[1627]: 2025-11-04 23:56:42.217 [INFO][4190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" 
Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.231835 containerd[1627]: 2025-11-04 23:56:42.218 [INFO][4190] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0", GenerateName:"calico-kube-controllers-7b747dc6fd-", Namespace:"calico-system", SelfLink:"", UID:"af60ecf0-185c-46c3-9aed-e5c51cd74bb3", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b747dc6fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd", Pod:"calico-kube-controllers-7b747dc6fd-58ml9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60eb23f9707", MAC:"be:3d:8e:5d:e4:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:42.231835 containerd[1627]: 2025-11-04 23:56:42.227 [INFO][4190] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" Namespace="calico-system" Pod="calico-kube-controllers-7b747dc6fd-58ml9" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--kube--controllers--7b747dc6fd--58ml9-eth0" Nov 4 23:56:42.231835 containerd[1627]: time="2025-11-04T23:56:42.231056782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxbfs,Uid:ef7f4d20-18d0-40c0-9e77-df43127da829,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee\"" Nov 4 23:56:42.236295 containerd[1627]: time="2025-11-04T23:56:42.236276513Z" level=info msg="CreateContainer within sandbox \"73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:56:42.251910 containerd[1627]: time="2025-11-04T23:56:42.251468353Z" level=info msg="connecting to shim c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd" address="unix:///run/containerd/s/1e2fdecad2e35dafcc60bc6eebd2a4e47de2e8375e24f2998413e6e2ee05069f" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:42.256371 containerd[1627]: time="2025-11-04T23:56:42.256323032Z" level=info msg="Container 2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:42.262099 containerd[1627]: time="2025-11-04T23:56:42.262074888Z" level=info msg="CreateContainer within sandbox \"73ff09609b8601870b8d9a879b863b27c39b7d03f98a754a2a15d41d855fddee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0\"" Nov 4 23:56:42.262656 containerd[1627]: time="2025-11-04T23:56:42.262635615Z" level=info msg="StartContainer for \"2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0\"" Nov 4 23:56:42.266200 containerd[1627]: time="2025-11-04T23:56:42.266022962Z" level=info msg="connecting to shim 2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0" address="unix:///run/containerd/s/239e7f6005bae4c4bf92e0613e1a90f003267b437ccd66584173d9182c122d23" protocol=ttrpc version=3 Nov 4 23:56:42.279833 systemd[1]: Started cri-containerd-c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd.scope - libcontainer container c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd. Nov 4 23:56:42.283737 systemd[1]: Started cri-containerd-2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0.scope - libcontainer container 2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0. Nov 4 23:56:42.320415 containerd[1627]: time="2025-11-04T23:56:42.320341064Z" level=info msg="StartContainer for \"2254bec821226c0c4a0a69e5ac22876bb1e515653fec3156ba4da03a4ddc2fa0\" returns successfully" Nov 4 23:56:42.347379 containerd[1627]: time="2025-11-04T23:56:42.347316558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b747dc6fd-58ml9,Uid:af60ecf0-185c-46c3-9aed-e5c51cd74bb3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6425352f70dd93ba98bcfd919d94bfd4f79777969069c629e9538829911a4cd\"" Nov 4 23:56:42.350067 containerd[1627]: time="2025-11-04T23:56:42.350034725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:56:42.786415 containerd[1627]: time="2025-11-04T23:56:42.786287663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:42.788117 containerd[1627]: time="2025-11-04T23:56:42.788051487Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:56:42.788388 containerd[1627]: time="2025-11-04T23:56:42.788169638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:42.788607 kubelet[2826]: E1104 23:56:42.788457 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:42.789082 kubelet[2826]: E1104 23:56:42.788612 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:42.789082 kubelet[2826]: E1104 23:56:42.788889 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6nn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b747dc6fd-58ml9_calico-system(af60ecf0-185c-46c3-9aed-e5c51cd74bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:42.790568 kubelet[2826]: E1104 23:56:42.790511 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:56:42.965961 containerd[1627]: time="2025-11-04T23:56:42.965905738Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-4ms7h,Uid:380d0997-b155-4f76-994b-7e2911c8cbf8,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:42.976722 containerd[1627]: time="2025-11-04T23:56:42.976641283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-8x7v5,Uid:931996dd-fd1b-4af9-a724-280dd54dbe3b,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:56:43.170622 kubelet[2826]: E1104 23:56:43.170323 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:56:43.200948 systemd-networkd[1526]: cali8f5c4f3dd45: Link UP Nov 4 23:56:43.201649 systemd-networkd[1526]: cali8f5c4f3dd45: Gained carrier Nov 4 23:56:43.233178 kubelet[2826]: I1104 23:56:43.233058 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rxbfs" podStartSLOduration=37.233042491 podStartE2EDuration="37.233042491s" podCreationTimestamp="2025-11-04 23:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:43.220839332 +0000 UTC m=+42.371578446" watchObservedRunningTime="2025-11-04 23:56:43.233042491 +0000 UTC m=+42.383781595" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.058 [INFO][4367] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.076 [INFO][4367] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0 csi-node-driver- calico-system 380d0997-b155-4f76-994b-7e2911c8cbf8 742 0 2025-11-04 23:56:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 csi-node-driver-4ms7h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f5c4f3dd45 [] [] }} ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.076 [INFO][4367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.116 [INFO][4402] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" HandleID="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.118 [INFO][4402] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" HandleID="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" 
Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"csi-node-driver-4ms7h", "timestamp":"2025-11-04 23:56:43.116677002 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.118 [INFO][4402] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.118 [INFO][4402] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.118 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.126 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.143 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.147 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.149 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.156 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 
2025-11-04 23:56:43.156 [INFO][4402] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.161 [INFO][4402] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174 Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.169 [INFO][4402] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.190 [INFO][4402] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.4/26] block=192.168.97.0/26 handle="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.191 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.4/26] handle="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.191 [INFO][4402] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:56:43.235728 containerd[1627]: 2025-11-04 23:56:43.191 [INFO][4402] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.4/26] IPv6=[] ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" HandleID="k8s-pod-network.de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.237483 containerd[1627]: 2025-11-04 23:56:43.198 [INFO][4367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"380d0997-b155-4f76-994b-7e2911c8cbf8", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"csi-node-driver-4ms7h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.4/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5c4f3dd45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:43.237483 containerd[1627]: 2025-11-04 23:56:43.198 [INFO][4367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.4/32] ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.237483 containerd[1627]: 2025-11-04 23:56:43.198 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5c4f3dd45 ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.237483 containerd[1627]: 2025-11-04 23:56:43.201 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.237483 containerd[1627]: 2025-11-04 23:56:43.201 [INFO][4367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"380d0997-b155-4f76-994b-7e2911c8cbf8", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174", Pod:"csi-node-driver-4ms7h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5c4f3dd45", MAC:"ea:c2:f1:8e:9a:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:43.237483 containerd[1627]: 2025-11-04 23:56:43.232 [INFO][4367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" Namespace="calico-system" Pod="csi-node-driver-4ms7h" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-csi--node--driver--4ms7h-eth0" Nov 4 23:56:43.276255 containerd[1627]: time="2025-11-04T23:56:43.276210845Z" level=info msg="connecting to shim de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174" address="unix:///run/containerd/s/542079974e3de9f72f9e91e98a4b06b91b0de396afcb44ac0e2440bb8ecd53a1" 
namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:43.314733 systemd[1]: Started cri-containerd-de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174.scope - libcontainer container de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174. Nov 4 23:56:43.322147 systemd-networkd[1526]: cali13827236a10: Link UP Nov 4 23:56:43.323177 systemd-networkd[1526]: cali13827236a10: Gained carrier Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.068 [INFO][4377] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.093 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0 calico-apiserver-677647d65b- calico-apiserver 931996dd-fd1b-4af9-a724-280dd54dbe3b 848 0 2025-11-04 23:56:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677647d65b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 calico-apiserver-677647d65b-8x7v5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali13827236a10 [] [] }} ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.094 [INFO][4377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 
23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.144 [INFO][4409] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" HandleID="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.145 [INFO][4409] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" HandleID="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"calico-apiserver-677647d65b-8x7v5", "timestamp":"2025-11-04 23:56:43.144863033 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.145 [INFO][4409] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.191 [INFO][4409] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.191 [INFO][4409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.236 [INFO][4409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.246 [INFO][4409] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.264 [INFO][4409] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.269 [INFO][4409] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.275 [INFO][4409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.275 [INFO][4409] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.279 [INFO][4409] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8 Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.283 [INFO][4409] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.297 [INFO][4409] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.97.5/26] block=192.168.97.0/26 handle="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.298 [INFO][4409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.5/26] handle="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.299 [INFO][4409] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:43.342171 containerd[1627]: 2025-11-04 23:56:43.299 [INFO][4409] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.5/26] IPv6=[] ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" HandleID="k8s-pod-network.82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 23:56:43.344641 containerd[1627]: 2025-11-04 23:56:43.306 [INFO][4377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0", GenerateName:"calico-apiserver-677647d65b-", Namespace:"calico-apiserver", SelfLink:"", UID:"931996dd-fd1b-4af9-a724-280dd54dbe3b", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"677647d65b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"calico-apiserver-677647d65b-8x7v5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13827236a10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:43.344641 containerd[1627]: 2025-11-04 23:56:43.306 [INFO][4377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.5/32] ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 23:56:43.344641 containerd[1627]: 2025-11-04 23:56:43.306 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13827236a10 ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 23:56:43.344641 containerd[1627]: 2025-11-04 23:56:43.322 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" 
WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 23:56:43.344641 containerd[1627]: 2025-11-04 23:56:43.322 [INFO][4377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0", GenerateName:"calico-apiserver-677647d65b-", Namespace:"calico-apiserver", SelfLink:"", UID:"931996dd-fd1b-4af9-a724-280dd54dbe3b", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677647d65b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8", Pod:"calico-apiserver-677647d65b-8x7v5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali13827236a10", MAC:"5a:85:af:57:5d:3d", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:43.344641 containerd[1627]: 2025-11-04 23:56:43.337 [INFO][4377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-8x7v5" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--8x7v5-eth0" Nov 4 23:56:43.383664 containerd[1627]: time="2025-11-04T23:56:43.383622697Z" level=info msg="connecting to shim 82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8" address="unix:///run/containerd/s/d138d4ce382d87997dee9f10d0cc849f7b33681adc8d140261bf043a54130335" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:43.392208 containerd[1627]: time="2025-11-04T23:56:43.390841774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ms7h,Uid:380d0997-b155-4f76-994b-7e2911c8cbf8,Namespace:calico-system,Attempt:0,} returns sandbox id \"de4d0f29dc5140a1f82ccfd3e6482e99a57a1ccae24ef384efbd5e8827f3d174\"" Nov 4 23:56:43.397368 containerd[1627]: time="2025-11-04T23:56:43.397048030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:56:43.439619 systemd[1]: Started cri-containerd-82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8.scope - libcontainer container 82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8. 
Nov 4 23:56:43.488833 containerd[1627]: time="2025-11-04T23:56:43.487060544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-8x7v5,Uid:931996dd-fd1b-4af9-a724-280dd54dbe3b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"82efaa0415ad4e33727b8538896ae299051045774c67ca0bc6a571a7c1c9bdd8\"" Nov 4 23:56:43.833984 systemd-networkd[1526]: calie091a8f32da: Gained IPv6LL Nov 4 23:56:43.884997 containerd[1627]: time="2025-11-04T23:56:43.884876509Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:43.886613 containerd[1627]: time="2025-11-04T23:56:43.886508748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:56:43.886807 containerd[1627]: time="2025-11-04T23:56:43.886641777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:56:43.886946 kubelet[2826]: E1104 23:56:43.886867 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:43.887559 kubelet[2826]: E1104 23:56:43.886943 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:56:43.888559 kubelet[2826]: E1104 23:56:43.888451 2826 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:43.888960 containerd[1627]: time="2025-11-04T23:56:43.888788147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:43.961780 systemd-networkd[1526]: cali60eb23f9707: Gained IPv6LL Nov 4 23:56:43.965407 containerd[1627]: time="2025-11-04T23:56:43.965363466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-vprpk,Uid:2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa,Namespace:calico-apiserver,Attempt:0,}" Nov 4 23:56:43.965894 containerd[1627]: time="2025-11-04T23:56:43.965565744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pxcr4,Uid:13dd94ef-4602-4fcf-b36d-5a6661064a5d,Namespace:calico-system,Attempt:0,}" Nov 4 23:56:44.141608 systemd-networkd[1526]: cali7d717ee47d1: Link UP Nov 4 23:56:44.144683 systemd-networkd[1526]: cali7d717ee47d1: Gained carrier Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.019 [INFO][4531] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.036 [INFO][4531] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0 goldmane-666569f655- calico-system 13dd94ef-4602-4fcf-b36d-5a6661064a5d 844 0 2025-11-04 23:56:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 goldmane-666569f655-pxcr4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7d717ee47d1 [] [] }} 
ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.036 [INFO][4531] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.085 [INFO][4551] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" HandleID="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.085 [INFO][4551] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" HandleID="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"goldmane-666569f655-pxcr4", "timestamp":"2025-11-04 23:56:44.085562297 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.085 [INFO][4551] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.085 [INFO][4551] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.085 [INFO][4551] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.100 [INFO][4551] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.106 [INFO][4551] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.116 [INFO][4551] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.118 [INFO][4551] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.121 [INFO][4551] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.121 [INFO][4551] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.122 [INFO][4551] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391 Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.126 [INFO][4551] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" 
host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.132 [INFO][4551] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.6/26] block=192.168.97.0/26 handle="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.132 [INFO][4551] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.6/26] handle="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.132 [INFO][4551] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:44.158868 containerd[1627]: 2025-11-04 23:56:44.132 [INFO][4551] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.6/26] IPv6=[] ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" HandleID="k8s-pod-network.8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.160632 containerd[1627]: 2025-11-04 23:56:44.136 [INFO][4531] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"13dd94ef-4602-4fcf-b36d-5a6661064a5d", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"goldmane-666569f655-pxcr4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7d717ee47d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:44.160632 containerd[1627]: 2025-11-04 23:56:44.136 [INFO][4531] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.6/32] ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.160632 containerd[1627]: 2025-11-04 23:56:44.136 [INFO][4531] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d717ee47d1 ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.160632 containerd[1627]: 2025-11-04 23:56:44.145 [INFO][4531] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" 
WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.160632 containerd[1627]: 2025-11-04 23:56:44.145 [INFO][4531] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"13dd94ef-4602-4fcf-b36d-5a6661064a5d", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391", Pod:"goldmane-666569f655-pxcr4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7d717ee47d1", MAC:"62:fc:a6:a2:66:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 
23:56:44.160632 containerd[1627]: 2025-11-04 23:56:44.154 [INFO][4531] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" Namespace="calico-system" Pod="goldmane-666569f655-pxcr4" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-goldmane--666569f655--pxcr4-eth0" Nov 4 23:56:44.181036 containerd[1627]: time="2025-11-04T23:56:44.180535691Z" level=info msg="connecting to shim 8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391" address="unix:///run/containerd/s/9a2817e352c705f3b8f1a67e8ff6c780678d062d0b1d977f16aca4ffc22370c4" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:44.181835 kubelet[2826]: E1104 23:56:44.181806 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:56:44.209588 systemd[1]: Started cri-containerd-8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391.scope - libcontainer container 8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391. 
Nov 4 23:56:44.238860 systemd-networkd[1526]: calicbdba4d5a1c: Link UP Nov 4 23:56:44.238994 systemd-networkd[1526]: calicbdba4d5a1c: Gained carrier Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.032 [INFO][4525] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.053 [INFO][4525] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0 calico-apiserver-677647d65b- calico-apiserver 2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa 849 0 2025-11-04 23:56:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677647d65b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 calico-apiserver-677647d65b-vprpk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicbdba4d5a1c [] [] }} ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.054 [INFO][4525] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.105 [INFO][4557] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" 
HandleID="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.106 [INFO][4557] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" HandleID="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"calico-apiserver-677647d65b-vprpk", "timestamp":"2025-11-04 23:56:44.105873339 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.106 [INFO][4557] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.133 [INFO][4557] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.134 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.201 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.211 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.216 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.218 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.220 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.221 [INFO][4557] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.223 [INFO][4557] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044 Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.227 [INFO][4557] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.234 [INFO][4557] ipam/ipam.go 1262: Successfully claimed IPs: 
[192.168.97.7/26] block=192.168.97.0/26 handle="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.234 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.7/26] handle="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.234 [INFO][4557] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 4 23:56:44.252545 containerd[1627]: 2025-11-04 23:56:44.234 [INFO][4557] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.7/26] IPv6=[] ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" HandleID="k8s-pod-network.442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.252994 containerd[1627]: 2025-11-04 23:56:44.236 [INFO][4525] cni-plugin/k8s.go 418: Populated endpoint ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0", GenerateName:"calico-apiserver-677647d65b-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"677647d65b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"calico-apiserver-677647d65b-vprpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicbdba4d5a1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:44.252994 containerd[1627]: 2025-11-04 23:56:44.236 [INFO][4525] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.7/32] ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.252994 containerd[1627]: 2025-11-04 23:56:44.236 [INFO][4525] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbdba4d5a1c ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.252994 containerd[1627]: 2025-11-04 23:56:44.239 [INFO][4525] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" 
WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.252994 containerd[1627]: 2025-11-04 23:56:44.239 [INFO][4525] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0", GenerateName:"calico-apiserver-677647d65b-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677647d65b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044", Pod:"calico-apiserver-677647d65b-vprpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicbdba4d5a1c", MAC:"0a:2a:89:57:3d:ac", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:44.252994 containerd[1627]: 2025-11-04 23:56:44.250 [INFO][4525] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" Namespace="calico-apiserver" Pod="calico-apiserver-677647d65b-vprpk" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-calico--apiserver--677647d65b--vprpk-eth0" Nov 4 23:56:44.276154 containerd[1627]: time="2025-11-04T23:56:44.276071395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pxcr4,Uid:13dd94ef-4602-4fcf-b36d-5a6661064a5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d6559a74a5b56c5953de519c9fb658d8ba2f62d31a465f89f59d2daf8e07391\"" Nov 4 23:56:44.280859 containerd[1627]: time="2025-11-04T23:56:44.280824175Z" level=info msg="connecting to shim 442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044" address="unix:///run/containerd/s/5f9599f77e72aa059f49b7986dddd5abfaa1615da5d07f348e5a1b945c872ea5" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:44.303589 systemd[1]: Started cri-containerd-442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044.scope - libcontainer container 442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044. 
Nov 4 23:56:44.323762 containerd[1627]: time="2025-11-04T23:56:44.323735698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:44.324854 containerd[1627]: time="2025-11-04T23:56:44.324812350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:44.325076 containerd[1627]: time="2025-11-04T23:56:44.324949767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:44.325383 kubelet[2826]: E1104 23:56:44.325191 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:44.325383 kubelet[2826]: E1104 23:56:44.325240 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:44.326038 kubelet[2826]: E1104 23:56:44.325539 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6w8gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-8x7v5_calico-apiserver(931996dd-fd1b-4af9-a724-280dd54dbe3b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:44.326599 containerd[1627]: time="2025-11-04T23:56:44.325872721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:56:44.326940 kubelet[2826]: E1104 23:56:44.326912 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:56:44.349404 containerd[1627]: time="2025-11-04T23:56:44.348750018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677647d65b-vprpk,Uid:2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"442fb72fca9c115a60c3c3e153991920abf34418520704545e110fa7d62b7044\"" Nov 4 23:56:44.602619 systemd-networkd[1526]: cali13827236a10: Gained IPv6LL Nov 4 23:56:44.743778 containerd[1627]: time="2025-11-04T23:56:44.743682274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:44.745469 containerd[1627]: time="2025-11-04T23:56:44.745377210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:56:44.745607 containerd[1627]: 
time="2025-11-04T23:56:44.745520538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:56:44.745858 kubelet[2826]: E1104 23:56:44.745716 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:44.745858 kubelet[2826]: E1104 23:56:44.745777 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:56:44.746143 kubelet[2826]: E1104 23:56:44.746042 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:44.747250 kubelet[2826]: E1104 23:56:44.747183 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:56:44.747425 containerd[1627]: time="2025-11-04T23:56:44.747272913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:56:44.793708 systemd-networkd[1526]: cali8f5c4f3dd45: Gained IPv6LL Nov 4 23:56:44.967147 containerd[1627]: time="2025-11-04T23:56:44.966883538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cjmfr,Uid:2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2,Namespace:kube-system,Attempt:0,}" Nov 4 23:56:45.059013 kubelet[2826]: I1104 23:56:45.058521 2826 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 23:56:45.137984 systemd-networkd[1526]: cali5987dec134c: Link UP Nov 4 23:56:45.139458 systemd-networkd[1526]: cali5987dec134c: Gained carrier Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.008 [INFO][4688] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.022 [INFO][4688] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0 coredns-674b8bbfcf- kube-system 2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2 845 0 2025-11-04 23:56:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487-0-0-n-1c2c5ddea4 coredns-674b8bbfcf-cjmfr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5987dec134c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.022 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.060 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" HandleID="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.061 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" HandleID="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487-0-0-n-1c2c5ddea4", "pod":"coredns-674b8bbfcf-cjmfr", "timestamp":"2025-11-04 23:56:45.060917899 +0000 UTC"}, Hostname:"ci-4487-0-0-n-1c2c5ddea4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.061 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.061 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.061 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-0-n-1c2c5ddea4' Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.076 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.081 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.101 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.107 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.112 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.0/26 host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.112 [INFO][4701] ipam/ipam.go 1219: Attempting to 
assign 1 addresses from block block=192.168.97.0/26 handle="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.114 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.121 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.0/26 handle="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.131 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.8/26] block=192.168.97.0/26 handle="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.131 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.8/26] handle="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" host="ci-4487-0-0-n-1c2c5ddea4" Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.131 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 4 23:56:45.163408 containerd[1627]: 2025-11-04 23:56:45.131 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.8/26] IPv6=[] ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" HandleID="k8s-pod-network.ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Workload="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.164549 containerd[1627]: 2025-11-04 23:56:45.134 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"", Pod:"coredns-674b8bbfcf-cjmfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali5987dec134c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:45.164549 containerd[1627]: 2025-11-04 23:56:45.135 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.8/32] ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.164549 containerd[1627]: 2025-11-04 23:56:45.135 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5987dec134c ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.164549 containerd[1627]: 2025-11-04 23:56:45.141 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.164549 containerd[1627]: 2025-11-04 23:56:45.142 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" 
WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 4, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-0-n-1c2c5ddea4", ContainerID:"ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab", Pod:"coredns-674b8bbfcf-cjmfr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5987dec134c", MAC:"32:6e:f2:9e:b7:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 4 23:56:45.164549 
containerd[1627]: 2025-11-04 23:56:45.156 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" Namespace="kube-system" Pod="coredns-674b8bbfcf-cjmfr" WorkloadEndpoint="ci--4487--0--0--n--1c2c5ddea4-k8s-coredns--674b8bbfcf--cjmfr-eth0" Nov 4 23:56:45.179143 containerd[1627]: time="2025-11-04T23:56:45.179102748Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:45.181509 containerd[1627]: time="2025-11-04T23:56:45.181158300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:56:45.181509 containerd[1627]: time="2025-11-04T23:56:45.181348505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:45.182196 kubelet[2826]: E1104 23:56:45.182132 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:45.182196 kubelet[2826]: E1104 23:56:45.182189 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:56:45.183605 kubelet[2826]: E1104 23:56:45.182453 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhjtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pxcr4_calico-system(13dd94ef-4602-4fcf-b36d-5a6661064a5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:45.184159 kubelet[2826]: E1104 23:56:45.184031 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:56:45.184456 containerd[1627]: time="2025-11-04T23:56:45.184430284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:56:45.204613 kubelet[2826]: E1104 23:56:45.204501 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:56:45.207347 containerd[1627]: time="2025-11-04T23:56:45.206964756Z" level=info msg="connecting to shim ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab" address="unix:///run/containerd/s/ea95f98c7f5ea187309a89262b5f3559fd8a97f5185e439afb9cd4038f9c48c3" namespace=k8s.io protocol=ttrpc version=3 Nov 4 23:56:45.212087 kubelet[2826]: E1104 23:56:45.212061 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:56:45.213487 kubelet[2826]: E1104 23:56:45.213369 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:56:45.248741 systemd[1]: Started cri-containerd-ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab.scope - libcontainer container ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab. Nov 4 23:56:45.321039 containerd[1627]: time="2025-11-04T23:56:45.320950659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cjmfr,Uid:2dd6ec4e-4d6c-4ec9-b118-6c9da0be2ab2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab\"" Nov 4 23:56:45.326297 containerd[1627]: time="2025-11-04T23:56:45.326274899Z" level=info msg="CreateContainer within sandbox \"ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 23:56:45.341355 containerd[1627]: time="2025-11-04T23:56:45.340922913Z" level=info msg="Container a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:56:45.346263 containerd[1627]: time="2025-11-04T23:56:45.346234329Z" level=info msg="CreateContainer within sandbox \"ceb46b8149434de54752b5f2267d4d0093b13bf044f0a774e0d025ee7ec575ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9\"" Nov 4 23:56:45.347113 containerd[1627]: time="2025-11-04T23:56:45.347081381Z" level=info msg="StartContainer for 
\"a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9\"" Nov 4 23:56:45.347888 containerd[1627]: time="2025-11-04T23:56:45.347847724Z" level=info msg="connecting to shim a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9" address="unix:///run/containerd/s/ea95f98c7f5ea187309a89262b5f3559fd8a97f5185e439afb9cd4038f9c48c3" protocol=ttrpc version=3 Nov 4 23:56:45.361710 systemd[1]: Started cri-containerd-a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9.scope - libcontainer container a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9. Nov 4 23:56:45.385635 containerd[1627]: time="2025-11-04T23:56:45.385591558Z" level=info msg="StartContainer for \"a4505b5376c8d2eec6c452ac6292e533a9cb4bb5fe3f3d54e54416016f1b24b9\" returns successfully" Nov 4 23:56:45.497652 systemd-networkd[1526]: calicbdba4d5a1c: Gained IPv6LL Nov 4 23:56:45.632948 containerd[1627]: time="2025-11-04T23:56:45.632907803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:45.637004 containerd[1627]: time="2025-11-04T23:56:45.636958393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:56:45.637101 containerd[1627]: time="2025-11-04T23:56:45.637059041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:56:45.637747 kubelet[2826]: E1104 23:56:45.637664 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:45.637835 kubelet[2826]: E1104 23:56:45.637751 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:56:45.637994 kubelet[2826]: E1104 23:56:45.637934 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-vprpk_calico-apiserver(2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:45.639412 kubelet[2826]: E1104 23:56:45.639370 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:56:45.945770 systemd-networkd[1526]: cali7d717ee47d1: Gained IPv6LL Nov 4 23:56:46.052113 systemd-networkd[1526]: vxlan.calico: Link UP Nov 4 23:56:46.052123 systemd-networkd[1526]: vxlan.calico: Gained 
carrier Nov 4 23:56:46.219428 kubelet[2826]: E1104 23:56:46.219317 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:56:46.220110 kubelet[2826]: E1104 23:56:46.219444 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:56:46.268631 kubelet[2826]: I1104 23:56:46.268582 2826 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cjmfr" podStartSLOduration=40.268567223 podStartE2EDuration="40.268567223s" podCreationTimestamp="2025-11-04 23:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 23:56:46.257208304 +0000 UTC m=+45.407947418" watchObservedRunningTime="2025-11-04 23:56:46.268567223 +0000 UTC m=+45.419306337" Nov 4 23:56:46.778925 systemd-networkd[1526]: cali5987dec134c: Gained IPv6LL Nov 4 23:56:47.097643 systemd-networkd[1526]: vxlan.calico: Gained IPv6LL Nov 4 23:56:48.970187 
containerd[1627]: time="2025-11-04T23:56:48.969713464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:56:49.389079 containerd[1627]: time="2025-11-04T23:56:49.388905455Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:49.390631 containerd[1627]: time="2025-11-04T23:56:49.390523861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:56:49.390886 containerd[1627]: time="2025-11-04T23:56:49.390626553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:56:49.391011 kubelet[2826]: E1104 23:56:49.390899 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:49.391612 kubelet[2826]: E1104 23:56:49.391006 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:56:49.391612 kubelet[2826]: E1104 23:56:49.391352 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a1a393e6146d446cbeef71df668a987a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:49.397640 containerd[1627]: time="2025-11-04T23:56:49.397531351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:56:49.832264 containerd[1627]: time="2025-11-04T23:56:49.832162540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:49.833356 containerd[1627]: time="2025-11-04T23:56:49.833296099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:56:49.833549 containerd[1627]: time="2025-11-04T23:56:49.833376259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:49.833616 kubelet[2826]: E1104 23:56:49.833525 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:49.833616 kubelet[2826]: E1104 23:56:49.833566 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:56:49.833726 kubelet[2826]: E1104 23:56:49.833677 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:49.835162 kubelet[2826]: E1104 23:56:49.835115 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:56:54.967932 containerd[1627]: time="2025-11-04T23:56:54.967794278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:56:55.385334 systemd[1]: Started sshd@7-46.62.221.150:22-122.187.116.62:53198.service - OpenSSH per-connection server daemon (122.187.116.62:53198). 
Nov 4 23:56:55.397371 containerd[1627]: time="2025-11-04T23:56:55.397137826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:56:55.398569 containerd[1627]: time="2025-11-04T23:56:55.398521375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:56:55.399034 containerd[1627]: time="2025-11-04T23:56:55.398717772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:56:55.399144 kubelet[2826]: E1104 23:56:55.399053 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:55.399144 kubelet[2826]: E1104 23:56:55.399118 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:56:55.399854 kubelet[2826]: E1104 23:56:55.399358 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6nn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b747dc6fd-58ml9_calico-system(af60ecf0-185c-46c3-9aed-e5c51cd74bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:56:55.401548 kubelet[2826]: E1104 23:56:55.401493 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:56:59.493561 sshd-session[4963]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=122.187.116.62 user=root Nov 4 23:56:59.965737 
containerd[1627]: time="2025-11-04T23:56:59.965498489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:57:00.563099 containerd[1627]: time="2025-11-04T23:57:00.563045245Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:00.564379 containerd[1627]: time="2025-11-04T23:57:00.564271850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:57:00.564379 containerd[1627]: time="2025-11-04T23:57:00.564310041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:00.565132 containerd[1627]: time="2025-11-04T23:57:00.564980425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:57:00.565376 kubelet[2826]: E1104 23:57:00.564590 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:00.565376 kubelet[2826]: E1104 23:57:00.564642 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:00.565883 kubelet[2826]: E1104 23:57:00.565254 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6w8gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-8x7v5_calico-apiserver(931996dd-fd1b-4af9-a724-280dd54dbe3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:00.567008 kubelet[2826]: E1104 23:57:00.566963 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:57:00.985514 containerd[1627]: time="2025-11-04T23:57:00.985327291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:00.986712 containerd[1627]: 
time="2025-11-04T23:57:00.986675512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:57:00.986788 containerd[1627]: time="2025-11-04T23:57:00.986748770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:57:00.986972 kubelet[2826]: E1104 23:57:00.986936 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:57:00.987107 kubelet[2826]: E1104 23:57:00.987058 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:57:00.987670 kubelet[2826]: E1104 23:57:00.987253 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:00.987776 containerd[1627]: time="2025-11-04T23:57:00.987326339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:57:01.425814 containerd[1627]: time="2025-11-04T23:57:01.425746645Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:01.427212 containerd[1627]: time="2025-11-04T23:57:01.427161462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:57:01.427530 containerd[1627]: time="2025-11-04T23:57:01.427266458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:01.427813 kubelet[2826]: E1104 23:57:01.427735 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:57:01.427932 kubelet[2826]: E1104 23:57:01.427815 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:57:01.428529 containerd[1627]: time="2025-11-04T23:57:01.428271288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:57:01.428969 kubelet[2826]: E1104 23:57:01.428845 2826 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhjtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pxcr4_calico-system(13dd94ef-4602-4fcf-b36d-5a6661064a5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:01.430117 kubelet[2826]: E1104 23:57:01.430045 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:57:01.601370 sshd[4952]: PAM: Permission denied for root from 122.187.116.62 Nov 4 23:57:01.845006 containerd[1627]: time="2025-11-04T23:57:01.844951555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:01.846585 containerd[1627]: 
time="2025-11-04T23:57:01.846511954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:57:01.846733 containerd[1627]: time="2025-11-04T23:57:01.846611840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:57:01.846806 kubelet[2826]: E1104 23:57:01.846740 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:57:01.846806 kubelet[2826]: E1104 23:57:01.846790 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:57:01.847328 kubelet[2826]: E1104 23:57:01.846936 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:01.848231 kubelet[2826]: E1104 23:57:01.848179 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:57:01.966436 containerd[1627]: time="2025-11-04T23:57:01.965732486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:57:02.049903 sshd[4952]: Connection closed by authenticating user root 122.187.116.62 port 53198 [preauth] Nov 4 23:57:02.053352 systemd[1]: sshd@7-46.62.221.150:22-122.187.116.62:53198.service: Deactivated successfully. 
Nov 4 23:57:02.414434 containerd[1627]: time="2025-11-04T23:57:02.414373311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:02.415626 containerd[1627]: time="2025-11-04T23:57:02.415556915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:57:02.415767 containerd[1627]: time="2025-11-04T23:57:02.415597060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:02.415855 kubelet[2826]: E1104 23:57:02.415813 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:02.415936 kubelet[2826]: E1104 23:57:02.415866 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:02.416034 kubelet[2826]: E1104 23:57:02.415987 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-vprpk_calico-apiserver(2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:02.417535 kubelet[2826]: E1104 23:57:02.417482 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:57:03.966864 kubelet[2826]: E1104 23:57:03.966675 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:57:06.279539 containerd[1627]: time="2025-11-04T23:57:06.279431126Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" id:\"b96c7bd6a7a8d2ff2c799660de067bb6fe56b6457a3b4e4ff11f93fa619167cf\" pid:4981 exited_at:{seconds:1762300626 nanos:278684550}" Nov 4 23:57:08.971017 kubelet[2826]: E1104 23:57:08.970673 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:57:13.966225 kubelet[2826]: E1104 23:57:13.966148 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:57:13.967774 kubelet[2826]: E1104 23:57:13.966742 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:57:14.971360 kubelet[2826]: E1104 23:57:14.971303 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:57:14.975215 containerd[1627]: time="2025-11-04T23:57:14.975146406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:57:15.588794 containerd[1627]: time="2025-11-04T23:57:15.588698236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:15.590856 containerd[1627]: time="2025-11-04T23:57:15.590646759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:57:15.591130 
containerd[1627]: time="2025-11-04T23:57:15.591062317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:57:15.591770 kubelet[2826]: E1104 23:57:15.591647 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:57:15.591932 kubelet[2826]: E1104 23:57:15.591784 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:57:15.592946 kubelet[2826]: E1104 23:57:15.592277 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a1a393e6146d446cbeef71df668a987a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:15.596386 containerd[1627]: time="2025-11-04T23:57:15.596006549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:57:15.965008 kubelet[2826]: E1104 23:57:15.964851 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:57:16.047519 containerd[1627]: time="2025-11-04T23:57:16.047448493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:16.049642 containerd[1627]: time="2025-11-04T23:57:16.048445473Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:57:16.049642 containerd[1627]: time="2025-11-04T23:57:16.049501949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:57:16.049789 kubelet[2826]: E1104 23:57:16.049737 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:57:16.050050 kubelet[2826]: E1104 23:57:16.049797 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:57:16.050050 kubelet[2826]: E1104 23:57:16.049912 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:16.051010 kubelet[2826]: E1104 23:57:16.050981 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:57:22.968465 containerd[1627]: time="2025-11-04T23:57:22.968259876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:57:23.395625 containerd[1627]: time="2025-11-04T23:57:23.395423259Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:23.397046 containerd[1627]: time="2025-11-04T23:57:23.397013348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:57:23.397247 containerd[1627]: time="2025-11-04T23:57:23.397161882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:57:23.397713 kubelet[2826]: E1104 23:57:23.397494 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:57:23.397713 kubelet[2826]: E1104 23:57:23.397540 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:57:23.397713 kubelet[2826]: E1104 23:57:23.397666 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6nn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b747dc6fd-58ml9_calico-system(af60ecf0-185c-46c3-9aed-e5c51cd74bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:23.399638 kubelet[2826]: E1104 23:57:23.399545 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:57:24.976408 containerd[1627]: time="2025-11-04T23:57:24.976183674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:57:25.415661 containerd[1627]: 
time="2025-11-04T23:57:25.415610258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:25.417206 containerd[1627]: time="2025-11-04T23:57:25.417148285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:57:25.417438 containerd[1627]: time="2025-11-04T23:57:25.417307671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:25.417930 kubelet[2826]: E1104 23:57:25.417721 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:25.417930 kubelet[2826]: E1104 23:57:25.417770 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:25.418270 kubelet[2826]: E1104 23:57:25.417908 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6w8gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-8x7v5_calico-apiserver(931996dd-fd1b-4af9-a724-280dd54dbe3b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:25.419391 kubelet[2826]: E1104 23:57:25.419363 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:57:27.969029 kubelet[2826]: E1104 23:57:27.968979 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:57:28.967490 containerd[1627]: time="2025-11-04T23:57:28.966131967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:57:29.408650 
containerd[1627]: time="2025-11-04T23:57:29.408070771Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:29.409632 containerd[1627]: time="2025-11-04T23:57:29.409584071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:57:29.410509 containerd[1627]: time="2025-11-04T23:57:29.409719435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:57:29.410547 kubelet[2826]: E1104 23:57:29.409944 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:57:29.410547 kubelet[2826]: E1104 23:57:29.409985 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:57:29.410547 kubelet[2826]: E1104 23:57:29.410179 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:29.411932 containerd[1627]: time="2025-11-04T23:57:29.411729732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:57:29.845045 containerd[1627]: time="2025-11-04T23:57:29.844947876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:29.846742 containerd[1627]: time="2025-11-04T23:57:29.846561269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:57:29.846742 containerd[1627]: time="2025-11-04T23:57:29.846701403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:29.847172 kubelet[2826]: E1104 23:57:29.847077 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:29.847172 kubelet[2826]: E1104 23:57:29.847147 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:57:29.847674 kubelet[2826]: E1104 23:57:29.847508 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-vprpk_calico-apiserver(2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:29.851233 kubelet[2826]: E1104 23:57:29.848822 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:57:29.851378 containerd[1627]: time="2025-11-04T23:57:29.847943484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:57:30.287123 containerd[1627]: 
time="2025-11-04T23:57:30.287077108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:30.288635 containerd[1627]: time="2025-11-04T23:57:30.288402513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:57:30.288635 containerd[1627]: time="2025-11-04T23:57:30.288460702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:57:30.288794 kubelet[2826]: E1104 23:57:30.288764 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:57:30.288863 kubelet[2826]: E1104 23:57:30.288809 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:57:30.289546 kubelet[2826]: E1104 23:57:30.288998 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:30.290802 containerd[1627]: time="2025-11-04T23:57:30.290181315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:57:30.291019 kubelet[2826]: E1104 23:57:30.290740 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:57:30.719459 containerd[1627]: time="2025-11-04T23:57:30.719309196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:57:30.721151 containerd[1627]: time="2025-11-04T23:57:30.721096986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:57:30.721405 containerd[1627]: time="2025-11-04T23:57:30.721238941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:57:30.721784 kubelet[2826]: E1104 23:57:30.721682 2826 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:57:30.722027 kubelet[2826]: E1104 23:57:30.721825 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:57:30.727207 kubelet[2826]: E1104 23:57:30.727016 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnl
y:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhjtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pxcr4_calico-system(13dd94ef-4602-4fcf-b36d-5a6661064a5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:57:30.728397 kubelet[2826]: E1104 23:57:30.728351 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:57:36.280747 containerd[1627]: time="2025-11-04T23:57:36.280661093Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" id:\"5f1d866d60cd1989c41e85e4082efd58e95e248fe10982c2eb8d0b613d94156c\" pid:5023 exited_at:{seconds:1762300656 nanos:280262096}" Nov 4 23:57:38.969615 kubelet[2826]: E1104 23:57:38.968541 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:57:38.978055 kubelet[2826]: E1104 23:57:38.977935 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" 
podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:57:40.987755 kubelet[2826]: E1104 23:57:40.986889 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:57:40.989605 kubelet[2826]: E1104 23:57:40.988558 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:57:43.965518 kubelet[2826]: E1104 23:57:43.965444 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:57:45.965511 kubelet[2826]: E1104 23:57:45.965261 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:57:49.332631 systemd[1]: Started sshd@8-46.62.221.150:22-147.75.109.163:46570.service - OpenSSH per-connection server daemon (147.75.109.163:46570). Nov 4 23:57:50.439722 sshd[5047]: Accepted publickey for core from 147.75.109.163 port 46570 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:57:50.445311 sshd-session[5047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:50.453809 systemd-logind[1604]: New session 8 of user core. 
Nov 4 23:57:50.458615 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 23:57:51.718336 sshd[5050]: Connection closed by 147.75.109.163 port 46570 Nov 4 23:57:51.719791 sshd-session[5047]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:51.727559 systemd-logind[1604]: Session 8 logged out. Waiting for processes to exit. Nov 4 23:57:51.728872 systemd[1]: sshd@8-46.62.221.150:22-147.75.109.163:46570.service: Deactivated successfully. Nov 4 23:57:51.732230 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 23:57:51.735848 systemd-logind[1604]: Removed session 8. Nov 4 23:57:51.967731 kubelet[2826]: E1104 23:57:51.967663 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:57:52.967821 kubelet[2826]: E1104 23:57:52.967743 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:57:53.967644 kubelet[2826]: E1104 23:57:53.967337 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:57:53.969385 kubelet[2826]: E1104 23:57:53.969316 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" 
podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:57:54.114125 update_engine[1608]: I20251104 23:57:54.114044 1608 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 4 23:57:54.115391 update_engine[1608]: I20251104 23:57:54.114768 1608 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 4 23:57:54.117511 update_engine[1608]: I20251104 23:57:54.117134 1608 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 4 23:57:54.117736 update_engine[1608]: I20251104 23:57:54.117713 1608 omaha_request_params.cc:62] Current group set to alpha Nov 4 23:57:54.117941 update_engine[1608]: I20251104 23:57:54.117919 1608 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 4 23:57:54.118011 update_engine[1608]: I20251104 23:57:54.117995 1608 update_attempter.cc:643] Scheduling an action processor start. Nov 4 23:57:54.118149 update_engine[1608]: I20251104 23:57:54.118112 1608 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 4 23:57:54.118270 update_engine[1608]: I20251104 23:57:54.118252 1608 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 4 23:57:54.118410 update_engine[1608]: I20251104 23:57:54.118388 1608 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 4 23:57:54.118514 update_engine[1608]: I20251104 23:57:54.118463 1608 omaha_request_action.cc:272] Request: Nov 4 23:57:54.120228 update_engine[1608]: I20251104 23:57:54.118638 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:57:54.150527 update_engine[1608]: 
I20251104 23:57:54.149940 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:57:54.158016 update_engine[1608]: I20251104 23:57:54.157961 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:57:54.158353 update_engine[1608]: E20251104 23:57:54.158309 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:57:54.158429 update_engine[1608]: I20251104 23:57:54.158381 1608 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 4 23:57:54.158469 locksmithd[1668]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 4 23:57:54.967215 kubelet[2826]: E1104 23:57:54.967165 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:57:56.901821 systemd[1]: Started sshd@9-46.62.221.150:22-147.75.109.163:51002.service - OpenSSH per-connection server daemon (147.75.109.163:51002). 
Nov 4 23:57:57.966837 kubelet[2826]: E1104 23:57:57.966787 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:57:57.986168 sshd[5064]: Accepted publickey for core from 147.75.109.163 port 51002 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:57:57.988228 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:57:57.995286 systemd-logind[1604]: New session 9 of user core. Nov 4 23:57:58.000772 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 23:57:58.905029 sshd[5067]: Connection closed by 147.75.109.163 port 51002 Nov 4 23:57:58.904102 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Nov 4 23:57:58.913637 systemd-logind[1604]: Session 9 logged out. Waiting for processes to exit. Nov 4 23:57:58.914441 systemd[1]: sshd@9-46.62.221.150:22-147.75.109.163:51002.service: Deactivated successfully. Nov 4 23:57:58.921844 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 23:57:58.927061 systemd-logind[1604]: Removed session 9. Nov 4 23:57:59.076234 systemd[1]: Started sshd@10-46.62.221.150:22-147.75.109.163:51008.service - OpenSSH per-connection server daemon (147.75.109.163:51008). 
Nov 4 23:58:00.111302 sshd[5080]: Accepted publickey for core from 147.75.109.163 port 51008 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:00.115384 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:00.126557 systemd-logind[1604]: New session 10 of user core. Nov 4 23:58:00.131725 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 23:58:01.058361 sshd[5083]: Connection closed by 147.75.109.163 port 51008 Nov 4 23:58:01.058550 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:01.069733 systemd[1]: sshd@10-46.62.221.150:22-147.75.109.163:51008.service: Deactivated successfully. Nov 4 23:58:01.075722 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 23:58:01.078191 systemd-logind[1604]: Session 10 logged out. Waiting for processes to exit. Nov 4 23:58:01.082853 systemd-logind[1604]: Removed session 10. Nov 4 23:58:01.227145 systemd[1]: Started sshd@11-46.62.221.150:22-147.75.109.163:58294.service - OpenSSH per-connection server daemon (147.75.109.163:58294). Nov 4 23:58:02.266027 sshd[5095]: Accepted publickey for core from 147.75.109.163 port 58294 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:02.267774 sshd-session[5095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:02.272924 systemd-logind[1604]: New session 11 of user core. Nov 4 23:58:02.276605 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 23:58:03.081680 sshd[5102]: Connection closed by 147.75.109.163 port 58294 Nov 4 23:58:03.082292 sshd-session[5095]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:03.088603 systemd[1]: sshd@11-46.62.221.150:22-147.75.109.163:58294.service: Deactivated successfully. Nov 4 23:58:03.091397 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 23:58:03.094296 systemd-logind[1604]: Session 11 logged out. 
Waiting for processes to exit. Nov 4 23:58:03.099368 systemd-logind[1604]: Removed session 11. Nov 4 23:58:03.965430 containerd[1627]: time="2025-11-04T23:58:03.965057046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 4 23:58:04.039518 update_engine[1608]: I20251104 23:58:04.038539 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:58:04.039518 update_engine[1608]: I20251104 23:58:04.038663 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:58:04.039518 update_engine[1608]: I20251104 23:58:04.039225 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:58:04.040624 update_engine[1608]: E20251104 23:58:04.040593 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:58:04.042595 update_engine[1608]: I20251104 23:58:04.042563 1608 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 4 23:58:04.416714 containerd[1627]: time="2025-11-04T23:58:04.416655251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:04.418298 containerd[1627]: time="2025-11-04T23:58:04.418194102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 4 23:58:04.418405 containerd[1627]: time="2025-11-04T23:58:04.418352820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 4 23:58:04.418642 kubelet[2826]: E1104 23:58:04.418596 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:58:04.420778 kubelet[2826]: E1104 23:58:04.418656 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 4 23:58:04.420778 kubelet[2826]: E1104 23:58:04.419455 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6nn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr
:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7b747dc6fd-58ml9_calico-system(af60ecf0-185c-46c3-9aed-e5c51cd74bb3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:04.422144 kubelet[2826]: E1104 23:58:04.422064 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:58:04.971061 kubelet[2826]: E1104 23:58:04.970994 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:58:06.279273 containerd[1627]: time="2025-11-04T23:58:06.279225750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" id:\"aace0c9a290f3417d34ef846fd309ea39cb5113b1609e9731296ead65646dca7\" pid:5125 exit_status:1 exited_at:{seconds:1762300686 nanos:278529314}" Nov 4 23:58:07.967124 containerd[1627]: time="2025-11-04T23:58:07.966863752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 4 23:58:08.258669 systemd[1]: Started sshd@12-46.62.221.150:22-147.75.109.163:58304.service - OpenSSH per-connection server daemon (147.75.109.163:58304). 
Nov 4 23:58:08.404691 containerd[1627]: time="2025-11-04T23:58:08.404606172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:08.405685 containerd[1627]: time="2025-11-04T23:58:08.405640061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 4 23:58:08.405775 containerd[1627]: time="2025-11-04T23:58:08.405722220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 4 23:58:08.405966 kubelet[2826]: E1104 23:58:08.405906 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:58:08.406336 kubelet[2826]: E1104 23:58:08.405971 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 4 23:58:08.406336 kubelet[2826]: E1104 23:58:08.406100 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a1a393e6146d446cbeef71df668a987a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:08.408800 containerd[1627]: time="2025-11-04T23:58:08.408762956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 4 
23:58:08.833663 containerd[1627]: time="2025-11-04T23:58:08.833582452Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:08.835882 containerd[1627]: time="2025-11-04T23:58:08.835651470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 4 23:58:08.836293 kubelet[2826]: E1104 23:58:08.836235 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:58:08.836293 kubelet[2826]: E1104 23:58:08.836302 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 4 23:58:08.840189 kubelet[2826]: E1104 23:58:08.836567 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv9mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79bffcdf98-xwjrq_calico-system(8bf957df-2cff-497d-912f-bd9de450b664): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:08.840574 kubelet[2826]: E1104 23:58:08.840526 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:58:08.900765 containerd[1627]: time="2025-11-04T23:58:08.835763477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 4 23:58:08.966359 kubelet[2826]: E1104 23:58:08.966321 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:58:08.967008 containerd[1627]: time="2025-11-04T23:58:08.966624486Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:58:09.328546 sshd[5147]: Accepted publickey for core from 147.75.109.163 port 58304 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:09.329088 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:09.339499 systemd-logind[1604]: New session 12 of user core. Nov 4 23:58:09.344591 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 23:58:09.405024 containerd[1627]: time="2025-11-04T23:58:09.404969575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:09.406607 containerd[1627]: time="2025-11-04T23:58:09.406573543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:58:09.406695 containerd[1627]: time="2025-11-04T23:58:09.406648399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:09.408713 kubelet[2826]: E1104 23:58:09.408652 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:09.408713 kubelet[2826]: E1104 23:58:09.408700 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:09.409407 kubelet[2826]: E1104 23:58:09.408813 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6w8gg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-8x7v5_calico-apiserver(931996dd-fd1b-4af9-a724-280dd54dbe3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:09.409974 kubelet[2826]: E1104 23:58:09.409938 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:58:09.968943 kubelet[2826]: E1104 23:58:09.968288 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:58:10.251744 sshd[5150]: Connection closed by 147.75.109.163 port 58304 Nov 4 23:58:10.253867 sshd-session[5147]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:10.261254 systemd-logind[1604]: Session 12 logged out. Waiting for processes to exit. Nov 4 23:58:10.261722 systemd[1]: sshd@12-46.62.221.150:22-147.75.109.163:58304.service: Deactivated successfully. Nov 4 23:58:10.265059 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 23:58:10.268665 systemd-logind[1604]: Removed session 12. Nov 4 23:58:14.044357 update_engine[1608]: I20251104 23:58:14.043502 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:58:14.044357 update_engine[1608]: I20251104 23:58:14.043586 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:58:14.044357 update_engine[1608]: I20251104 23:58:14.044044 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:58:14.044968 update_engine[1608]: E20251104 23:58:14.044950 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:58:14.045063 update_engine[1608]: I20251104 23:58:14.045049 1608 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 4 23:58:15.428500 systemd[1]: Started sshd@13-46.62.221.150:22-147.75.109.163:40156.service - OpenSSH per-connection server daemon (147.75.109.163:40156). 
Nov 4 23:58:15.966887 containerd[1627]: time="2025-11-04T23:58:15.966563349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 4 23:58:16.410198 containerd[1627]: time="2025-11-04T23:58:16.410103507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:16.412008 containerd[1627]: time="2025-11-04T23:58:16.411908230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 4 23:58:16.412130 containerd[1627]: time="2025-11-04T23:58:16.412041225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 4 23:58:16.413040 kubelet[2826]: E1104 23:58:16.412334 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:58:16.413040 kubelet[2826]: E1104 23:58:16.412414 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 4 23:58:16.413040 kubelet[2826]: E1104 23:58:16.412619 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:16.415820 containerd[1627]: time="2025-11-04T23:58:16.415737007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 4 23:58:16.454532 sshd[5162]: Accepted publickey for core from 147.75.109.163 port 40156 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:16.457435 sshd-session[5162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:16.466267 systemd-logind[1604]: New session 13 of user core. Nov 4 23:58:16.472739 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 23:58:16.850231 containerd[1627]: time="2025-11-04T23:58:16.850173325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:16.851586 containerd[1627]: time="2025-11-04T23:58:16.851551427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 4 23:58:16.852312 containerd[1627]: time="2025-11-04T23:58:16.851631290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 4 23:58:16.852350 kubelet[2826]: E1104 23:58:16.852027 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:58:16.852350 kubelet[2826]: E1104 23:58:16.852072 2826 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 4 23:58:16.852350 kubelet[2826]: E1104 23:58:16.852192 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t46m6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ms7h_calico-system(380d0997-b155-4f76-994b-7e2911c8cbf8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:16.853712 kubelet[2826]: E1104 23:58:16.853674 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:58:17.230627 sshd[5165]: Connection closed by 147.75.109.163 port 40156 Nov 4 23:58:17.234006 sshd-session[5162]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:17.237426 systemd[1]: sshd@13-46.62.221.150:22-147.75.109.163:40156.service: Deactivated successfully. 
Nov 4 23:58:17.240073 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 23:58:17.241308 systemd-logind[1604]: Session 13 logged out. Waiting for processes to exit. Nov 4 23:58:17.244506 systemd-logind[1604]: Removed session 13. Nov 4 23:58:18.967247 kubelet[2826]: E1104 23:58:18.966693 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:58:20.966064 containerd[1627]: time="2025-11-04T23:58:20.965821040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 4 23:58:21.422751 containerd[1627]: time="2025-11-04T23:58:21.422676755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 23:58:21.425134 containerd[1627]: time="2025-11-04T23:58:21.425031096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 4 23:58:21.425350 containerd[1627]: time="2025-11-04T23:58:21.425100148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:21.425655 kubelet[2826]: E1104 23:58:21.425573 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:21.426453 kubelet[2826]: E1104 23:58:21.425676 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 4 23:58:21.426453 kubelet[2826]: E1104 23:58:21.425867 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8mq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-677647d65b-vprpk_calico-apiserver(2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:21.428776 kubelet[2826]: E1104 23:58:21.428693 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:58:22.411090 systemd[1]: Started sshd@14-46.62.221.150:22-147.75.109.163:39786.service - OpenSSH per-connection server daemon (147.75.109.163:39786). 
Nov 4 23:58:22.975162 containerd[1627]: time="2025-11-04T23:58:22.975111779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 4 23:58:22.979155 kubelet[2826]: E1104 23:58:22.978056 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:58:22.980253 kubelet[2826]: E1104 23:58:22.979527 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:58:23.416390 containerd[1627]: time="2025-11-04T23:58:23.416244486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 4 
23:58:23.417635 containerd[1627]: time="2025-11-04T23:58:23.417467061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 4 23:58:23.417794 containerd[1627]: time="2025-11-04T23:58:23.417599365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 4 23:58:23.417941 kubelet[2826]: E1104 23:58:23.417885 2826 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:58:23.417941 kubelet[2826]: E1104 23:58:23.417933 2826 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 4 23:58:23.418880 kubelet[2826]: E1104 23:58:23.418070 2826 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhjtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pxcr4_calico-system(13dd94ef-4602-4fcf-b36d-5a6661064a5d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 4 23:58:23.420124 kubelet[2826]: E1104 23:58:23.420075 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:58:23.428911 sshd[5198]: Accepted publickey for core from 147.75.109.163 port 39786 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:23.430502 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 
4 23:58:23.437825 systemd-logind[1604]: New session 14 of user core. Nov 4 23:58:23.444584 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 23:58:24.038559 update_engine[1608]: I20251104 23:58:24.038506 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:58:24.038838 update_engine[1608]: I20251104 23:58:24.038570 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:58:24.038838 update_engine[1608]: I20251104 23:58:24.038820 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 4 23:58:24.039255 update_engine[1608]: E20251104 23:58:24.039224 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:58:24.039291 update_engine[1608]: I20251104 23:58:24.039268 1608 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 4 23:58:24.039291 update_engine[1608]: I20251104 23:58:24.039275 1608 omaha_request_action.cc:617] Omaha request response: Nov 4 23:58:24.039349 update_engine[1608]: E20251104 23:58:24.039327 1608 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 4 23:58:24.045129 update_engine[1608]: I20251104 23:58:24.044104 1608 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 4 23:58:24.045129 update_engine[1608]: I20251104 23:58:24.044118 1608 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 4 23:58:24.045129 update_engine[1608]: I20251104 23:58:24.044122 1608 update_attempter.cc:306] Processing Done. Nov 4 23:58:24.046511 update_engine[1608]: E20251104 23:58:24.046118 1608 update_attempter.cc:619] Update failed. 
Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046567 1608 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046584 1608 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046591 1608 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046654 1608 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046671 1608 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046674 1608 omaha_request_action.cc:272] Request: Nov 4 23:58:24.047298 update_engine[1608]: Nov 4 23:58:24.047298 update_engine[1608]: Nov 4 23:58:24.047298 update_engine[1608]: Nov 4 23:58:24.047298 update_engine[1608]: Nov 4 23:58:24.047298 update_engine[1608]: Nov 4 23:58:24.047298 update_engine[1608]: Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046679 1608 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046693 1608 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 4 23:58:24.047298 update_engine[1608]: I20251104 23:58:24.046885 1608 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 4 23:58:24.047604 locksmithd[1668]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 4 23:58:24.047776 update_engine[1608]: E20251104 23:58:24.047562 1608 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047600 1608 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047607 1608 omaha_request_action.cc:617] Omaha request response: Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047611 1608 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047614 1608 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047616 1608 update_attempter.cc:306] Processing Done. Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047621 1608 update_attempter.cc:310] Error event sent. Nov 4 23:58:24.047776 update_engine[1608]: I20251104 23:58:24.047626 1608 update_check_scheduler.cc:74] Next update check in 47m27s Nov 4 23:58:24.047897 locksmithd[1668]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 4 23:58:24.192105 sshd[5201]: Connection closed by 147.75.109.163 port 39786 Nov 4 23:58:24.192892 sshd-session[5198]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:24.199549 systemd[1]: sshd@14-46.62.221.150:22-147.75.109.163:39786.service: Deactivated successfully. Nov 4 23:58:24.206976 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 23:58:24.213375 systemd-logind[1604]: Session 14 logged out. Waiting for processes to exit. 
Nov 4 23:58:24.216593 systemd-logind[1604]: Removed session 14. Nov 4 23:58:24.403691 systemd[1]: Started sshd@15-46.62.221.150:22-147.75.109.163:39790.service - OpenSSH per-connection server daemon (147.75.109.163:39790). Nov 4 23:58:25.534302 sshd[5213]: Accepted publickey for core from 147.75.109.163 port 39790 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:25.536078 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:25.546040 systemd-logind[1604]: New session 15 of user core. Nov 4 23:58:25.552929 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 23:58:26.734225 sshd[5216]: Connection closed by 147.75.109.163 port 39790 Nov 4 23:58:26.738063 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:26.744633 systemd-logind[1604]: Session 15 logged out. Waiting for processes to exit. Nov 4 23:58:26.745394 systemd[1]: sshd@15-46.62.221.150:22-147.75.109.163:39790.service: Deactivated successfully. Nov 4 23:58:26.747957 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 23:58:26.750347 systemd-logind[1604]: Removed session 15. Nov 4 23:58:26.887664 systemd[1]: Started sshd@16-46.62.221.150:22-147.75.109.163:39792.service - OpenSSH per-connection server daemon (147.75.109.163:39792). Nov 4 23:58:27.927904 sshd[5227]: Accepted publickey for core from 147.75.109.163 port 39792 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:27.931725 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:27.942560 systemd-logind[1604]: New session 16 of user core. Nov 4 23:58:27.949711 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 4 23:58:29.390454 sshd[5230]: Connection closed by 147.75.109.163 port 39792 Nov 4 23:58:29.392161 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:29.405399 systemd-logind[1604]: Session 16 logged out. Waiting for processes to exit. Nov 4 23:58:29.407050 systemd[1]: sshd@16-46.62.221.150:22-147.75.109.163:39792.service: Deactivated successfully. Nov 4 23:58:29.413599 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 23:58:29.420470 systemd-logind[1604]: Removed session 16. Nov 4 23:58:29.607965 systemd[1]: Started sshd@17-46.62.221.150:22-147.75.109.163:39808.service - OpenSSH per-connection server daemon (147.75.109.163:39808). Nov 4 23:58:30.792955 sshd[5248]: Accepted publickey for core from 147.75.109.163 port 39808 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:30.794850 sshd-session[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:30.803574 systemd-logind[1604]: New session 17 of user core. Nov 4 23:58:30.807780 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 4 23:58:30.975516 kubelet[2826]: E1104 23:58:30.975126 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:58:31.948302 sshd[5251]: Connection closed by 147.75.109.163 port 39808 Nov 4 23:58:31.952736 sshd-session[5248]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:31.961425 systemd[1]: sshd@17-46.62.221.150:22-147.75.109.163:39808.service: Deactivated successfully. Nov 4 23:58:31.965197 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 23:58:31.971006 systemd-logind[1604]: Session 17 logged out. Waiting for processes to exit. Nov 4 23:58:31.973115 systemd-logind[1604]: Removed session 17. Nov 4 23:58:32.107699 systemd[1]: Started sshd@18-46.62.221.150:22-147.75.109.163:59230.service - OpenSSH per-connection server daemon (147.75.109.163:59230). 
Nov 4 23:58:33.155225 sshd[5261]: Accepted publickey for core from 147.75.109.163 port 59230 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:33.156670 sshd-session[5261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:33.164651 systemd-logind[1604]: New session 18 of user core. Nov 4 23:58:33.169680 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 23:58:33.961829 sshd[5264]: Connection closed by 147.75.109.163 port 59230 Nov 4 23:58:33.962225 sshd-session[5261]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:33.968721 kubelet[2826]: E1104 23:58:33.968021 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:58:33.968721 kubelet[2826]: E1104 23:58:33.968363 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:58:33.966583 systemd[1]: 
sshd@18-46.62.221.150:22-147.75.109.163:59230.service: Deactivated successfully. Nov 4 23:58:33.971186 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 23:58:33.973818 systemd-logind[1604]: Session 18 logged out. Waiting for processes to exit. Nov 4 23:58:33.974862 systemd-logind[1604]: Removed session 18. Nov 4 23:58:34.973013 kubelet[2826]: E1104 23:58:34.972964 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:58:36.342530 containerd[1627]: time="2025-11-04T23:58:36.342089023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ca26977d6a5c9ca0214b35c97a6596546938c85b1e66f8c91991295a17af949\" id:\"c161bc670e119cc87a47515be286a2db921d3732c6ae6bd1b5eaa4b7c364a0b0\" pid:5290 exited_at:{seconds:1762300716 nanos:341728875}" Nov 4 23:58:36.968342 kubelet[2826]: E1104 23:58:36.968253 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:58:38.970801 kubelet[2826]: E1104 23:58:38.970319 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:58:39.166422 systemd[1]: Started sshd@19-46.62.221.150:22-147.75.109.163:59242.service - OpenSSH per-connection server daemon (147.75.109.163:59242). Nov 4 23:58:40.339576 sshd[5307]: Accepted publickey for core from 147.75.109.163 port 59242 ssh2: RSA SHA256:2nrKbouutsjVTOVoe49KHBpiak20yOpS7DxBbxhQEyE Nov 4 23:58:40.341837 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 23:58:40.348595 systemd-logind[1604]: New session 19 of user core. Nov 4 23:58:40.356122 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 23:58:41.291052 sshd[5310]: Connection closed by 147.75.109.163 port 59242 Nov 4 23:58:41.291783 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Nov 4 23:58:41.295795 systemd-logind[1604]: Session 19 logged out. Waiting for processes to exit. Nov 4 23:58:41.296318 systemd[1]: sshd@19-46.62.221.150:22-147.75.109.163:59242.service: Deactivated successfully. 
Nov 4 23:58:41.299191 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 23:58:41.301738 systemd-logind[1604]: Removed session 19. Nov 4 23:58:44.967909 kubelet[2826]: E1104 23:58:44.967861 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:58:45.965442 kubelet[2826]: E1104 23:58:45.965161 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:58:47.967243 kubelet[2826]: E1104 23:58:47.967120 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:58:47.968030 kubelet[2826]: E1104 23:58:47.967300 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664" Nov 4 23:58:48.965037 kubelet[2826]: E1104 23:58:48.964952 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:58:50.966395 kubelet[2826]: E1104 23:58:50.966132 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pxcr4" podUID="13dd94ef-4602-4fcf-b36d-5a6661064a5d" Nov 4 23:58:56.741423 systemd[1]: cri-containerd-f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13.scope: Deactivated successfully. Nov 4 23:58:56.741880 systemd[1]: cri-containerd-f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13.scope: Consumed 3.491s CPU time, 92.1M memory peak, 66.4M read from disk. 
Nov 4 23:58:56.784497 containerd[1627]: time="2025-11-04T23:58:56.784066840Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\" id:\"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\" pid:2654 exit_status:1 exited_at:{seconds:1762300736 nanos:783310360}" Nov 4 23:58:56.784497 containerd[1627]: time="2025-11-04T23:58:56.784273011Z" level=info msg="received exit event container_id:\"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\" id:\"f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13\" pid:2654 exit_status:1 exited_at:{seconds:1762300736 nanos:783310360}" Nov 4 23:58:56.948720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13-rootfs.mount: Deactivated successfully. Nov 4 23:58:56.978328 systemd[1]: cri-containerd-1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3.scope: Deactivated successfully. Nov 4 23:58:56.978669 systemd[1]: cri-containerd-1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3.scope: Consumed 1.744s CPU time, 40.1M memory peak, 36.8M read from disk. 
Nov 4 23:58:56.984349 containerd[1627]: time="2025-11-04T23:58:56.984313251Z" level=info msg="received exit event container_id:\"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\" id:\"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\" pid:2647 exit_status:1 exited_at:{seconds:1762300736 nanos:983850971}" Nov 4 23:58:57.024007 kubelet[2826]: E1104 23:58:57.023090 2826 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56388->10.0.0.2:2379: read: connection timed out" Nov 4 23:58:57.025163 containerd[1627]: time="2025-11-04T23:58:57.024628128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\" id:\"1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3\" pid:2647 exit_status:1 exited_at:{seconds:1762300736 nanos:983850971}" Nov 4 23:58:57.067441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3-rootfs.mount: Deactivated successfully. Nov 4 23:58:57.569373 systemd[1]: cri-containerd-43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b.scope: Deactivated successfully. Nov 4 23:58:57.569956 systemd[1]: cri-containerd-43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b.scope: Consumed 21.697s CPU time, 129.9M memory peak, 43.7M read from disk. 
Nov 4 23:58:57.574821 containerd[1627]: time="2025-11-04T23:58:57.574774519Z" level=info msg="received exit event container_id:\"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\" id:\"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\" pid:3150 exit_status:1 exited_at:{seconds:1762300737 nanos:573358163}" Nov 4 23:58:57.575745 containerd[1627]: time="2025-11-04T23:58:57.575674542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\" id:\"43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b\" pid:3150 exit_status:1 exited_at:{seconds:1762300737 nanos:573358163}" Nov 4 23:58:57.608724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b-rootfs.mount: Deactivated successfully. Nov 4 23:58:57.678020 kubelet[2826]: I1104 23:58:57.677982 2826 scope.go:117] "RemoveContainer" containerID="1aecbe773e27feb544ff19977b4afbd133bc25ef73d6244058b42f3b3fee2be3" Nov 4 23:58:57.678373 kubelet[2826]: I1104 23:58:57.678268 2826 scope.go:117] "RemoveContainer" containerID="43ad66d553e039ca8dd9602018e25475176b79bfc8eab9e5cd7dac68a54ce85b" Nov 4 23:58:57.679352 kubelet[2826]: I1104 23:58:57.679253 2826 scope.go:117] "RemoveContainer" containerID="f4fd2d9c8c8fb80c492f3ddbf24f1d795357d25ebe64086c9a096dd89536df13" Nov 4 23:58:57.708614 containerd[1627]: time="2025-11-04T23:58:57.707338093Z" level=info msg="CreateContainer within sandbox \"9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 4 23:58:57.710382 containerd[1627]: time="2025-11-04T23:58:57.710346981Z" level=info msg="CreateContainer within sandbox \"194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 4 23:58:57.728354 containerd[1627]: 
time="2025-11-04T23:58:57.728173174Z" level=info msg="CreateContainer within sandbox \"291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 4 23:58:57.820981 containerd[1627]: time="2025-11-04T23:58:57.820725343Z" level=info msg="Container b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:58:57.828511 containerd[1627]: time="2025-11-04T23:58:57.828440309Z" level=info msg="Container 8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:58:57.829290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774718655.mount: Deactivated successfully. Nov 4 23:58:57.846593 containerd[1627]: time="2025-11-04T23:58:57.844674010Z" level=info msg="Container ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e: CDI devices from CRI Config.CDIDevices: []" Nov 4 23:58:57.877637 containerd[1627]: time="2025-11-04T23:58:57.877591963Z" level=info msg="CreateContainer within sandbox \"194685c96b7086838326a1ac628375afc828c23054be04349bcf088c7bf8231b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec\"" Nov 4 23:58:57.885573 containerd[1627]: time="2025-11-04T23:58:57.885538589Z" level=info msg="StartContainer for \"b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec\"" Nov 4 23:58:57.892445 containerd[1627]: time="2025-11-04T23:58:57.892380143Z" level=info msg="CreateContainer within sandbox \"9921c76d80e044faf0712579c398416e24450efa75eb860aa3804d3d2b349538\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e\"" Nov 4 23:58:57.894424 containerd[1627]: time="2025-11-04T23:58:57.894365272Z" level=info msg="CreateContainer within sandbox 
\"291564ed6bd6f3b68f38fc8ad0fe04fe7e2f2fa61ac9c70a8ef167e9f5600f4b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856\"" Nov 4 23:58:57.895201 containerd[1627]: time="2025-11-04T23:58:57.895160045Z" level=info msg="StartContainer for \"8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856\"" Nov 4 23:58:57.895466 containerd[1627]: time="2025-11-04T23:58:57.895434367Z" level=info msg="StartContainer for \"ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e\"" Nov 4 23:58:57.895836 containerd[1627]: time="2025-11-04T23:58:57.895787279Z" level=info msg="connecting to shim b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec" address="unix:///run/containerd/s/1a39521528c4d869da6d4e0caa647cc814d0901a2afb8a5a9f354d644dbc99a1" protocol=ttrpc version=3 Nov 4 23:58:57.897542 containerd[1627]: time="2025-11-04T23:58:57.897374920Z" level=info msg="connecting to shim ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e" address="unix:///run/containerd/s/dfc9669b67541338778ec3f891562ec51c73c7d6af89d2883117ed8b7775cc6d" protocol=ttrpc version=3 Nov 4 23:58:57.898159 containerd[1627]: time="2025-11-04T23:58:57.898109609Z" level=info msg="connecting to shim 8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856" address="unix:///run/containerd/s/1b68a94eb4d4d62448a2d1cf151a305370ea78d61cd7ea1854a677288883a864" protocol=ttrpc version=3 Nov 4 23:58:57.940048 systemd[1]: Started cri-containerd-8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856.scope - libcontainer container 8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856. Nov 4 23:58:57.951076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155923175.mount: Deactivated successfully. Nov 4 23:58:57.951268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471666245.mount: Deactivated successfully. 
Nov 4 23:58:57.974020 systemd[1]: Started cri-containerd-b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec.scope - libcontainer container b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec. Nov 4 23:58:57.975417 systemd[1]: Started cri-containerd-ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e.scope - libcontainer container ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e. Nov 4 23:58:58.036818 containerd[1627]: time="2025-11-04T23:58:58.036696560Z" level=info msg="StartContainer for \"8eef69b14396ccd999ab72c3b57c9757783ba014485e72ff1c9c9dc5b0730856\" returns successfully" Nov 4 23:58:58.048716 kubelet[2826]: I1104 23:58:58.048679 2826 status_manager.go:895] "Failed to get status for pod" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56324->10.0.0.2:2379: read: connection timed out" Nov 4 23:58:58.065263 containerd[1627]: time="2025-11-04T23:58:58.065117865Z" level=info msg="StartContainer for \"b9f04e9a7a0ab1e634b6ce98a3822873f9dd04ca68f65b825d21945203f2efec\" returns successfully" Nov 4 23:58:58.078010 containerd[1627]: time="2025-11-04T23:58:58.077657071Z" level=info msg="StartContainer for \"ce18016779d364d3bf149f9e790bca6b5e2b8b1f86aa1de11d560ca6bd73ce5e\" returns successfully" Nov 4 23:58:58.087351 kubelet[2826]: E1104 23:58:58.055226 2826 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56206->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{whisker-79bffcdf98-xwjrq.1874f30e944c9621 calico-system 1623 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:whisker-79bffcdf98-xwjrq,UID:8bf957df-2cff-497d-912f-bd9de450b664,APIVersion:v1,ResourceVersion:906,FieldPath:spec.containers{whisker},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/whisker:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4487-0-0-n-1c2c5ddea4,},FirstTimestamp:2025-11-04 23:56:38 +0000 UTC,LastTimestamp:2025-11-04 23:58:47.965971375 +0000 UTC m=+167.116710529,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487-0-0-n-1c2c5ddea4,}" Nov 4 23:58:58.966978 kubelet[2826]: E1104 23:58:58.966932 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-8x7v5" podUID="931996dd-fd1b-4af9-a724-280dd54dbe3b" Nov 4 23:58:58.970335 kubelet[2826]: E1104 23:58:58.970297 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ms7h" podUID="380d0997-b155-4f76-994b-7e2911c8cbf8" Nov 4 23:58:59.965709 kubelet[2826]: E1104 23:58:59.965620 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-677647d65b-vprpk" podUID="2f84ec82-93eb-45f7-9d50-0aaa6ffeaaaa" Nov 4 23:59:01.965301 kubelet[2826]: E1104 23:59:01.965227 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7b747dc6fd-58ml9" podUID="af60ecf0-185c-46c3-9aed-e5c51cd74bb3" Nov 4 23:59:01.965862 kubelet[2826]: E1104 23:59:01.965751 2826 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79bffcdf98-xwjrq" podUID="8bf957df-2cff-497d-912f-bd9de450b664"