Oct 27 08:22:37.300221 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 27 06:24:35 -00 2025 Oct 27 08:22:37.300242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:22:37.300252 kernel: BIOS-provided physical RAM map: Oct 27 08:22:37.300257 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 27 08:22:37.300262 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 27 08:22:37.300267 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 27 08:22:37.300273 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Oct 27 08:22:37.300279 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Oct 27 08:22:37.300292 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 27 08:22:37.300302 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 27 08:22:37.300311 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 27 08:22:37.300321 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 27 08:22:37.300328 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Oct 27 08:22:37.300333 kernel: NX (Execute Disable) protection: active Oct 27 08:22:37.300341 kernel: APIC: Static calls initialized Oct 27 08:22:37.300347 kernel: SMBIOS 3.0.0 present. 
Oct 27 08:22:37.300353 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Oct 27 08:22:37.300358 kernel: DMI: Memory slots populated: 1/1 Oct 27 08:22:37.300364 kernel: Hypervisor detected: KVM Oct 27 08:22:37.300369 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Oct 27 08:22:37.300375 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 27 08:22:37.300380 kernel: kvm-clock: using sched offset of 3655098354 cycles Oct 27 08:22:37.300386 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 27 08:22:37.300393 kernel: tsc: Detected 2445.406 MHz processor Oct 27 08:22:37.300399 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 27 08:22:37.300405 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 27 08:22:37.300411 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Oct 27 08:22:37.300417 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 27 08:22:37.300423 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 27 08:22:37.300428 kernel: Using GB pages for direct mapping Oct 27 08:22:37.300435 kernel: ACPI: Early table checksum verification disabled Oct 27 08:22:37.300441 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Oct 27 08:22:37.300447 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300453 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300458 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300464 kernel: ACPI: FACS 0x000000007CFE0000 000040 Oct 27 08:22:37.300470 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300477 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300482 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300488 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 08:22:37.300496 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Oct 27 08:22:37.300502 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Oct 27 08:22:37.300508 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Oct 27 08:22:37.300515 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Oct 27 08:22:37.300521 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Oct 27 08:22:37.300527 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Oct 27 08:22:37.300533 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Oct 27 08:22:37.300538 kernel: No NUMA configuration found Oct 27 08:22:37.300544 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Oct 27 08:22:37.300551 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff] Oct 27 08:22:37.300558 kernel: Zone ranges: Oct 27 08:22:37.300564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 27 08:22:37.300570 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Oct 27 08:22:37.300575 kernel: Normal empty Oct 27 08:22:37.300581 kernel: Device empty Oct 27 08:22:37.300587 kernel: Movable zone start for each node Oct 27 08:22:37.300595 kernel: Early memory node ranges Oct 27 08:22:37.300600 kernel: node 0: [mem 
0x0000000000001000-0x000000000009efff] Oct 27 08:22:37.300606 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Oct 27 08:22:37.300612 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Oct 27 08:22:37.300621 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 27 08:22:37.300632 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 27 08:22:37.300644 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Oct 27 08:22:37.300654 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 27 08:22:37.300667 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 27 08:22:37.300677 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 27 08:22:37.300685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 27 08:22:37.300691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 27 08:22:37.300697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 27 08:22:37.300703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 27 08:22:37.300709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 27 08:22:37.300716 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 27 08:22:37.300722 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Oct 27 08:22:37.300728 kernel: CPU topo: Max. logical packages: 1 Oct 27 08:22:37.300734 kernel: CPU topo: Max. logical dies: 1 Oct 27 08:22:37.300739 kernel: CPU topo: Max. dies per package: 1 Oct 27 08:22:37.300745 kernel: CPU topo: Max. threads per core: 1 Oct 27 08:22:37.300751 kernel: CPU topo: Num. cores per package: 2 Oct 27 08:22:37.300757 kernel: CPU topo: Num. threads per package: 2 Oct 27 08:22:37.300764 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Oct 27 08:22:37.300770 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 27 08:22:37.300776 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 27 08:22:37.300782 kernel: Booting paravirtualized kernel on KVM Oct 27 08:22:37.300788 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 27 08:22:37.300794 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 27 08:22:37.300800 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Oct 27 08:22:37.300807 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Oct 27 08:22:37.300813 kernel: pcpu-alloc: [0] 0 1 Oct 27 08:22:37.300819 kernel: kvm-guest: PV spinlocks disabled, no host support Oct 27 08:22:37.300826 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:22:37.300832 kernel: random: crng init done Oct 27 08:22:37.300850 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 08:22:37.300865 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 27 08:22:37.300879 kernel: Fallback order for Node 0: 0 Oct 27 08:22:37.300889 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 511866 Oct 27 08:22:37.300895 kernel: Policy zone: DMA32 Oct 27 08:22:37.301000 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 08:22:37.301008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 27 08:22:37.301014 kernel: ftrace: allocating 40092 entries in 157 pages Oct 27 08:22:37.301020 kernel: ftrace: allocated 157 pages with 5 groups Oct 27 08:22:37.301029 kernel: Dynamic Preempt: voluntary Oct 27 08:22:37.301035 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 08:22:37.301042 kernel: rcu: RCU event tracing is enabled. Oct 27 08:22:37.301048 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 27 08:22:37.301054 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 08:22:37.301060 kernel: Rude variant of Tasks RCU enabled. Oct 27 08:22:37.301066 kernel: Tracing variant of Tasks RCU enabled. Oct 27 08:22:37.301072 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 27 08:22:37.301079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 27 08:22:37.301085 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 27 08:22:37.301091 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 27 08:22:37.301097 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Oct 27 08:22:37.301116 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 27 08:22:37.301122 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 27 08:22:37.301128 kernel: Console: colour VGA+ 80x25 Oct 27 08:22:37.301135 kernel: printk: legacy console [tty0] enabled Oct 27 08:22:37.301141 kernel: printk: legacy console [ttyS0] enabled Oct 27 08:22:37.301147 kernel: ACPI: Core revision 20240827 Oct 27 08:22:37.301157 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Oct 27 08:22:37.301165 kernel: APIC: Switch to symmetric I/O mode setup Oct 27 08:22:37.301171 kernel: x2apic enabled Oct 27 08:22:37.301177 kernel: APIC: Switched APIC routing to: physical x2apic Oct 27 08:22:37.301184 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Oct 27 08:22:37.301190 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fc4eb620, max_idle_ns: 440795316590 ns Oct 27 08:22:37.301197 kernel: Calibrating delay loop (skipped) preset value.. 
4890.81 BogoMIPS (lpj=2445406) Oct 27 08:22:37.301204 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 27 08:22:37.301210 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Oct 27 08:22:37.301217 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Oct 27 08:22:37.301224 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 27 08:22:37.301230 kernel: Spectre V2 : Mitigation: Retpolines Oct 27 08:22:37.301237 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 27 08:22:37.301243 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Oct 27 08:22:37.301249 kernel: active return thunk: retbleed_return_thunk Oct 27 08:22:37.301255 kernel: RETBleed: Mitigation: untrained return thunk Oct 27 08:22:37.301262 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 27 08:22:37.301269 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 27 08:22:37.301276 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 27 08:22:37.301283 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 27 08:22:37.301289 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 27 08:22:37.301295 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 27 08:22:37.301302 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Oct 27 08:22:37.301308 kernel: Freeing SMP alternatives memory: 32K Oct 27 08:22:37.301315 kernel: pid_max: default: 32768 minimum: 301 Oct 27 08:22:37.301322 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 27 08:22:37.301328 kernel: landlock: Up and running. Oct 27 08:22:37.301334 kernel: SELinux: Initializing. Oct 27 08:22:37.301340 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 27 08:22:37.301347 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 27 08:22:37.301353 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Oct 27 08:22:37.301360 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Oct 27 08:22:37.301366 kernel: ... version: 0 Oct 27 08:22:37.301373 kernel: ... bit width: 48 Oct 27 08:22:37.301379 kernel: ... generic registers: 6 Oct 27 08:22:37.301385 kernel: ... value mask: 0000ffffffffffff Oct 27 08:22:37.301391 kernel: ... max period: 00007fffffffffff Oct 27 08:22:37.301397 kernel: ... fixed-purpose events: 0 Oct 27 08:22:37.301404 kernel: ... event mask: 000000000000003f Oct 27 08:22:37.301411 kernel: signal: max sigframe size: 1776 Oct 27 08:22:37.301417 kernel: rcu: Hierarchical SRCU implementation. Oct 27 08:22:37.301424 kernel: rcu: Max phase no-delay instances is 400. Oct 27 08:22:37.301430 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Oct 27 08:22:37.301436 kernel: smp: Bringing up secondary CPUs ... Oct 27 08:22:37.301442 kernel: smpboot: x86: Booting SMP configuration: Oct 27 08:22:37.301448 kernel: .... 
node #0, CPUs: #1 Oct 27 08:22:37.301456 kernel: smp: Brought up 1 node, 2 CPUs Oct 27 08:22:37.301462 kernel: smpboot: Total of 2 processors activated (9781.62 BogoMIPS) Oct 27 08:22:37.301469 kernel: Memory: 1940308K/2047464K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved) Oct 27 08:22:37.301475 kernel: devtmpfs: initialized Oct 27 08:22:37.301481 kernel: x86/mm: Memory block size: 128MB Oct 27 08:22:37.301488 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 08:22:37.301494 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 27 08:22:37.301501 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 08:22:37.301508 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 08:22:37.301514 kernel: audit: initializing netlink subsys (disabled) Oct 27 08:22:37.301520 kernel: audit: type=2000 audit(1761553355.309:1): state=initialized audit_enabled=0 res=1 Oct 27 08:22:37.301526 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 08:22:37.301537 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 27 08:22:37.301549 kernel: cpuidle: using governor menu Oct 27 08:22:37.301561 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 08:22:37.301567 kernel: dca service started, version 1.12.1 Oct 27 08:22:37.301574 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 27 08:22:37.301580 kernel: PCI: Using configuration type 1 for base access Oct 27 08:22:37.301586 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 27 08:22:37.301593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 08:22:37.301599 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 08:22:37.301607 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 08:22:37.301613 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 08:22:37.301619 kernel: ACPI: Added _OSI(Module Device) Oct 27 08:22:37.301626 kernel: ACPI: Added _OSI(Processor Device) Oct 27 08:22:37.301632 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 08:22:37.301638 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 08:22:37.301644 kernel: ACPI: Interpreter enabled Oct 27 08:22:37.301652 kernel: ACPI: PM: (supports S0 S5) Oct 27 08:22:37.301658 kernel: ACPI: Using IOAPIC for interrupt routing Oct 27 08:22:37.301664 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 27 08:22:37.301671 kernel: PCI: Using E820 reservations for host bridge windows Oct 27 08:22:37.301677 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 27 08:22:37.301683 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 27 08:22:37.301825 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 08:22:37.301933 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Oct 27 08:22:37.302047 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Oct 27 08:22:37.302063 kernel: PCI host bridge to bus 0000:00 Oct 27 08:22:37.302181 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 27 08:22:37.302257 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 27 08:22:37.302370 kernel: pci_bus 0000:00: root bus 
resource [mem 0x000a0000-0x000bffff window] Oct 27 08:22:37.302445 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Oct 27 08:22:37.302515 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 27 08:22:37.302585 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Oct 27 08:22:37.302654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 27 08:22:37.302750 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 27 08:22:37.302843 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Oct 27 08:22:37.306365 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref] Oct 27 08:22:37.306695 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref] Oct 27 08:22:37.306877 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff] Oct 27 08:22:37.307015 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref] Oct 27 08:22:37.307122 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 27 08:22:37.307216 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.307297 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff] Oct 27 08:22:37.307377 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 27 08:22:37.307485 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 27 08:22:37.307570 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 27 08:22:37.307660 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.307739 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff] Oct 27 08:22:37.307845 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 27 08:22:37.308019 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 27 08:22:37.308120 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 27 08:22:37.308209 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.308294 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff] Oct 27 08:22:37.308373 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 27 08:22:37.308450 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 27 08:22:37.308527 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 27 08:22:37.309672 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.309787 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff] Oct 27 08:22:37.309885 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 27 08:22:37.309987 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 27 08:22:37.310068 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 27 08:22:37.310174 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.310256 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff] Oct 27 08:22:37.310341 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 27 08:22:37.310421 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Oct 27 08:22:37.310498 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 27 08:22:37.310582 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.310661 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff] Oct 27 08:22:37.310737 
kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 27 08:22:37.310818 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 27 08:22:37.310897 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 27 08:22:37.312142 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.312263 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff] Oct 27 08:22:37.312349 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 27 08:22:37.312431 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 27 08:22:37.312521 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 27 08:22:37.312609 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.312691 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff] Oct 27 08:22:37.312771 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 27 08:22:37.312855 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Oct 27 08:22:37.312963 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 27 08:22:37.313053 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 27 08:22:37.313148 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff] Oct 27 08:22:37.313229 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 27 08:22:37.313306 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 27 08:22:37.313384 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 27 08:22:37.313474 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 27 08:22:37.313553 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 27 08:22:37.313640 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 27 08:22:37.313719 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f] Oct 27 08:22:37.313796 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff] Oct 27 08:22:37.313880 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 27 08:22:37.314586 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 27 08:22:37.314688 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Oct 27 08:22:37.314774 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff] Oct 27 08:22:37.314856 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Oct 27 08:22:37.314955 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref] Oct 27 08:22:37.315047 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 27 08:22:37.315152 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Oct 27 08:22:37.315239 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit] Oct 27 08:22:37.315322 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 27 08:22:37.315413 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Oct 27 08:22:37.315496 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff] Oct 27 08:22:37.315582 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref] Oct 27 08:22:37.315664 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 27 08:22:37.316387 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Oct 27 08:22:37.316489 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Oct 27 08:22:37.316576 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] 
Oct 27 08:22:37.316679 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Oct 27 08:22:37.316796 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff] Oct 27 08:22:37.317069 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref] Oct 27 08:22:37.317173 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 27 08:22:37.317266 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Oct 27 08:22:37.317350 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff] Oct 27 08:22:37.317438 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref] Oct 27 08:22:37.317521 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 27 08:22:37.317532 kernel: acpiphp: Slot [0] registered Oct 27 08:22:37.317617 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Oct 27 08:22:37.317699 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff] Oct 27 08:22:37.317781 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref] Oct 27 08:22:37.317871 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref] Oct 27 08:22:37.318016 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 27 08:22:37.318028 kernel: acpiphp: Slot [0-2] registered Oct 27 08:22:37.318157 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 27 08:22:37.318169 kernel: acpiphp: Slot [0-3] registered Oct 27 08:22:37.318251 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 27 08:22:37.318265 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 27 08:22:37.318272 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 27 08:22:37.318279 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 27 08:22:37.318286 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 27 08:22:37.318292 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 27 08:22:37.318299 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 27 08:22:37.318306 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 27 08:22:37.318315 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 27 08:22:37.318321 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 27 08:22:37.318328 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 27 08:22:37.318335 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 27 08:22:37.318341 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 27 08:22:37.318348 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 27 08:22:37.318355 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 27 08:22:37.318363 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 27 08:22:37.318370 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 27 08:22:37.318376 kernel: iommu: Default domain type: Translated Oct 27 08:22:37.318383 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 27 08:22:37.318390 kernel: PCI: Using ACPI for IRQ routing Oct 27 08:22:37.318397 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 27 08:22:37.318404 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 27 08:22:37.318412 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Oct 27 08:22:37.318497 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 27 08:22:37.318578 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 27 08:22:37.318657 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 27 
08:22:37.318666 kernel: vgaarb: loaded Oct 27 08:22:37.318673 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Oct 27 08:22:37.318681 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Oct 27 08:22:37.318690 kernel: clocksource: Switched to clocksource kvm-clock Oct 27 08:22:37.318697 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 08:22:37.318704 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 08:22:37.318711 kernel: pnp: PnP ACPI init Oct 27 08:22:37.318804 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 27 08:22:37.318815 kernel: pnp: PnP ACPI: found 5 devices Oct 27 08:22:37.318822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 27 08:22:37.318831 kernel: NET: Registered PF_INET protocol family Oct 27 08:22:37.318838 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 08:22:37.318844 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 27 08:22:37.318851 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 08:22:37.318858 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 27 08:22:37.318864 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 27 08:22:37.318871 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 27 08:22:37.318879 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 27 08:22:37.318886 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 27 08:22:37.318893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 08:22:37.318900 kernel: NET: Registered PF_XDP protocol family Oct 27 08:22:37.320061 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 27 08:22:37.320205 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 27 08:22:37.320295 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 27 08:22:37.320384 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned Oct 27 08:22:37.320464 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned Oct 27 08:22:37.320555 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Oct 27 08:22:37.320638 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 27 08:22:37.320717 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Oct 27 08:22:37.320794 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 27 08:22:37.320877 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 27 08:22:37.320977 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Oct 27 08:22:37.321060 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 27 08:22:37.321160 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 27 08:22:37.321242 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Oct 27 08:22:37.321320 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 27 08:22:37.321399 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 27 08:22:37.321478 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Oct 27 08:22:37.321582 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 27 08:22:37.321663 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 27 08:22:37.321745 kernel: pci 0000:00:02.4: 
bridge window [mem 0xfe000000-0xfe1fffff] Oct 27 08:22:37.321823 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 27 08:22:37.321901 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 27 08:22:37.322012 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Oct 27 08:22:37.322092 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 27 08:22:37.322190 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 27 08:22:37.322269 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Oct 27 08:22:37.322347 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Oct 27 08:22:37.322429 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 27 08:22:37.322508 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 27 08:22:37.322586 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Oct 27 08:22:37.322663 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Oct 27 08:22:37.322742 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 27 08:22:37.322819 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 27 08:22:37.322901 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Oct 27 08:22:37.323357 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 27 08:22:37.323444 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 27 08:22:37.323521 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 27 08:22:37.323594 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 27 08:22:37.323666 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 27 08:22:37.323738 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Oct 27 08:22:37.323813 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 27 08:22:37.323883 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Oct 27 08:22:37.324004 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 27 08:22:37.324083 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Oct 27 08:22:37.324200 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 27 08:22:37.324280 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 27 08:22:37.324367 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 27 08:22:37.325024 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 27 08:22:37.325136 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 27 08:22:37.325275 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 27 08:22:37.325412 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 27 08:22:37.325517 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 27 08:22:37.325600 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Oct 27 08:22:37.325674 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 27 08:22:37.325752 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Oct 27 08:22:37.325825 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 27 08:22:37.325900 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 27 08:22:37.327033 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Oct 27 08:22:37.327132 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Oct 27 
08:22:37.327210 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 27 08:22:37.327289 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Oct 27 08:22:37.327365 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 27 08:22:37.327442 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 27 08:22:37.327453 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 27 08:22:37.327462 kernel: PCI: CLS 0 bytes, default 64 Oct 27 08:22:37.327469 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fc4eb620, max_idle_ns: 440795316590 ns Oct 27 08:22:37.327476 kernel: Initialise system trusted keyrings Oct 27 08:22:37.327484 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 27 08:22:37.327493 kernel: Key type asymmetric registered Oct 27 08:22:37.327500 kernel: Asymmetric key parser 'x509' registered Oct 27 08:22:37.327507 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 08:22:37.327514 kernel: io scheduler mq-deadline registered Oct 27 08:22:37.327521 kernel: io scheduler kyber registered Oct 27 08:22:37.327528 kernel: io scheduler bfq registered Oct 27 08:22:37.327610 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 27 08:22:37.327692 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 27 08:22:37.327775 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 27 08:22:37.327853 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 27 08:22:37.328815 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 27 08:22:37.329939 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 27 08:22:37.330058 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 27 08:22:37.330162 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 27 08:22:37.330250 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 27 08:22:37.330330 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 27 08:22:37.330412 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 27 08:22:37.330491 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 27 08:22:37.330572 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 27 08:22:37.330649 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 27 08:22:37.330731 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 27 08:22:37.330809 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 27 08:22:37.330819 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 27 08:22:37.330896 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Oct 27 08:22:37.331060 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Oct 27 08:22:37.331075 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 27 08:22:37.331086 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Oct 27 08:22:37.331094 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 08:22:37.331112 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 27 08:22:37.331119 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 27 08:22:37.331126 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 27 08:22:37.331134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 27 08:22:37.331230 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 27 08:22:37.331307 kernel: rtc_cmos 00:03: registered as rtc0 Oct 27 08:22:37.331380 kernel: rtc_cmos 00:03: setting system clock to 2025-10-27T08:22:35 UTC (1761553355) Oct 27 08:22:37.331453 kernel: 
rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 27 08:22:37.331463 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Oct 27 08:22:37.331470 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 27 08:22:37.331479 kernel: NET: Registered PF_INET6 protocol family Oct 27 08:22:37.331487 kernel: Segment Routing with IPv6 Oct 27 08:22:37.331494 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 08:22:37.331501 kernel: NET: Registered PF_PACKET protocol family Oct 27 08:22:37.331507 kernel: Key type dns_resolver registered Oct 27 08:22:37.331514 kernel: IPI shorthand broadcast: enabled Oct 27 08:22:37.331522 kernel: sched_clock: Marking stable (1451012520, 145527124)->(1600970863, -4431219) Oct 27 08:22:37.331530 kernel: registered taskstats version 1 Oct 27 08:22:37.331537 kernel: Loading compiled-in X.509 certificates Oct 27 08:22:37.331544 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 6c7ef547b8d769f7afd2708799fb9c3145695bfb' Oct 27 08:22:37.331551 kernel: Demotion targets for Node 0: null Oct 27 08:22:37.331558 kernel: Key type .fscrypt registered Oct 27 08:22:37.331564 kernel: Key type fscrypt-provisioning registered Oct 27 08:22:37.331571 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 27 08:22:37.331578 kernel: ima: Allocated hash algorithm: sha1 Oct 27 08:22:37.331586 kernel: ima: No architecture policies found Oct 27 08:22:37.331593 kernel: clk: Disabling unused clocks Oct 27 08:22:37.331599 kernel: Freeing unused kernel image (initmem) memory: 15964K Oct 27 08:22:37.331606 kernel: Write protecting the kernel read-only data: 40960k Oct 27 08:22:37.331613 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 27 08:22:37.331620 kernel: Run /init as init process Oct 27 08:22:37.331629 kernel: with arguments: Oct 27 08:22:37.331644 kernel: /init Oct 27 08:22:37.331653 kernel: with environment: Oct 27 08:22:37.331660 kernel: HOME=/ Oct 27 08:22:37.331667 kernel: TERM=linux Oct 27 08:22:37.331674 kernel: ACPI: bus type USB registered Oct 27 08:22:37.331681 kernel: usbcore: registered new interface driver usbfs Oct 27 08:22:37.331688 kernel: usbcore: registered new interface driver hub Oct 27 08:22:37.331696 kernel: usbcore: registered new device driver usb Oct 27 08:22:37.331790 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 27 08:22:37.331875 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 27 08:22:37.331987 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 27 08:22:37.332070 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 27 08:22:37.332168 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 27 08:22:37.332249 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 27 08:22:37.332358 kernel: hub 1-0:1.0: USB hub found Oct 27 08:22:37.332446 kernel: hub 1-0:1.0: 4 ports detected Oct 27 08:22:37.332543 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 27 08:22:37.332645 kernel: hub 2-0:1.0: USB hub found Oct 27 08:22:37.332734 kernel: hub 2-0:1.0: 4 ports detected Oct 27 08:22:37.332746 kernel: SCSI subsystem initialized Oct 27 08:22:37.332757 kernel: libata version 3.00 loaded. 
Oct 27 08:22:37.332858 kernel: ahci 0000:00:1f.2: version 3.0 Oct 27 08:22:37.332870 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 27 08:22:37.332972 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 27 08:22:37.333052 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 27 08:22:37.333155 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 27 08:22:37.333249 kernel: scsi host0: ahci Oct 27 08:22:37.333335 kernel: scsi host1: ahci Oct 27 08:22:37.333422 kernel: scsi host2: ahci Oct 27 08:22:37.333506 kernel: scsi host3: ahci Oct 27 08:22:37.333594 kernel: scsi host4: ahci Oct 27 08:22:37.333679 kernel: scsi host5: ahci Oct 27 08:22:37.333689 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 38 lpm-pol 1 Oct 27 08:22:37.333697 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 38 lpm-pol 1 Oct 27 08:22:37.333704 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 38 lpm-pol 1 Oct 27 08:22:37.333711 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 38 lpm-pol 1 Oct 27 08:22:37.333721 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 38 lpm-pol 1 Oct 27 08:22:37.333729 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 38 lpm-pol 1 Oct 27 08:22:37.333830 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 27 08:22:37.333842 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 27 08:22:37.333849 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 27 08:22:37.333856 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 27 08:22:37.333865 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 27 08:22:37.333872 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 27 08:22:37.333879 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 27 08:22:37.333885 kernel: ata1.00: LPM support broken, forcing max_power Oct 27 08:22:37.333892 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 27 08:22:37.333899 kernel: ata1.00: applying bridge limits Oct 27 08:22:37.333906 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 27 08:22:37.333937 kernel: ata1.00: LPM support broken, forcing max_power Oct 27 08:22:37.333947 kernel: ata1.00: configured for UDMA/100 Oct 27 08:22:37.334051 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 27 08:22:37.334062 kernel: usbcore: registered new interface driver usbhid Oct 27 08:22:37.334070 kernel: usbhid: USB HID core driver Oct 27 08:22:37.334178 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Oct 27 08:22:37.334270 kernel: scsi host6: Virtio SCSI HBA Oct 27 08:22:37.334390 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 27 08:22:37.334479 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 27 08:22:37.334489 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 27 08:22:37.334496 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.334578 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Oct 27 08:22:37.334588 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Oct 27 08:22:37.334697 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 27 08:22:37.334787 kernel: sd 6:0:0:0: Power-on or device reset occurred Oct 27 08:22:37.334874 kernel: sd 6:0:0:0: [sda] 
80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 27 08:22:37.334983 kernel: sd 6:0:0:0: [sda] Write Protect is off Oct 27 08:22:37.335072 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08 Oct 27 08:22:37.335181 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 27 08:22:37.335192 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 08:22:37.335200 kernel: GPT:25804799 != 80003071 Oct 27 08:22:37.335207 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 27 08:22:37.335214 kernel: GPT:25804799 != 80003071 Oct 27 08:22:37.335220 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 27 08:22:37.335227 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 27 08:22:37.335316 kernel: sd 6:0:0:0: [sda] Attached SCSI disk Oct 27 08:22:37.335326 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 08:22:37.335333 kernel: device-mapper: uevent: version 1.0.3 Oct 27 08:22:37.335340 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 27 08:22:37.335347 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 27 08:22:37.335354 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335361 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335370 kernel: raid6: avx2x4 gen() 16120 MB/s Oct 27 08:22:37.335377 kernel: raid6: avx2x2 gen() 16237 MB/s Oct 27 08:22:37.335384 kernel: raid6: avx2x1 gen() 15943 MB/s Oct 27 08:22:37.335391 kernel: raid6: using algorithm avx2x2 gen() 16237 MB/s Oct 27 08:22:37.335398 kernel: raid6: .... xor() 31317 MB/s, rmw enabled Oct 27 08:22:37.335405 kernel: raid6: using avx2x2 recovery algorithm Oct 27 08:22:37.335412 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335420 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335426 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335433 kernel: xor: automatically using best checksumming function avx Oct 27 08:22:37.335440 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335447 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 08:22:37.335454 kernel: BTRFS: device fsid bf514789-bcec-4c15-ac9d-e4c3d19a42b2 devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (180) Oct 27 08:22:37.335461 kernel: BTRFS info (device dm-0): first mount of filesystem bf514789-bcec-4c15-ac9d-e4c3d19a42b2 Oct 27 08:22:37.335468 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:22:37.335476 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 27 08:22:37.335483 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 08:22:37.335490 kernel: BTRFS info (device dm-0): enabling free space tree Oct 27 08:22:37.335497 kernel: Invalid ELF header magic: != \u007fELF Oct 27 08:22:37.335504 kernel: loop: module loaded Oct 27 08:22:37.335511 kernel: loop0: detected capacity change from 0 to 100120 Oct 27 08:22:37.335518 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 08:22:37.335527 systemd[1]: Successfully made /usr/ read-only. 
Oct 27 08:22:37.335538 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:22:37.335546 systemd[1]: Detected virtualization kvm. Oct 27 08:22:37.335553 systemd[1]: Detected architecture x86-64. Oct 27 08:22:37.335561 systemd[1]: Running in initrd. Oct 27 08:22:37.335568 systemd[1]: No hostname configured, using default hostname. Oct 27 08:22:37.335576 systemd[1]: Hostname set to . Oct 27 08:22:37.335584 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:22:37.335591 systemd[1]: Queued start job for default target initrd.target. Oct 27 08:22:37.335598 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:22:37.335606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:22:37.335613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:22:37.335622 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 27 08:22:37.335630 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:22:37.335637 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 08:22:37.335645 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 08:22:37.335653 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:22:37.335660 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:22:37.335669 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:22:37.335676 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:22:37.335684 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:22:37.335691 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:22:37.335699 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:22:37.335706 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:22:37.335713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:22:37.335722 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 08:22:37.335730 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 08:22:37.335737 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:22:37.335745 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:22:37.335752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:22:37.335759 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:22:37.335767 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 08:22:37.335776 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 08:22:37.335783 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Oct 27 08:22:37.335790 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 27 08:22:37.335799 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 27 08:22:37.335806 systemd[1]: Starting systemd-fsck-usr.service... Oct 27 08:22:37.335814 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:22:37.335822 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:22:37.335830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:22:37.335837 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 08:22:37.335845 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:22:37.335854 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 08:22:37.335862 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:22:37.335885 systemd-journald[316]: Collecting audit messages is disabled. Oct 27 08:22:37.335907 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:22:37.335939 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:22:37.335949 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 08:22:37.335957 kernel: Bridge firewalling registered Oct 27 08:22:37.335974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:22:37.335983 systemd-journald[316]: Journal started Oct 27 08:22:37.336002 systemd-journald[316]: Runtime Journal (/run/log/journal/4a46d62aaeb84e1eb3152cd38f3015ed) is 4.7M, max 38.3M, 33.5M free. Oct 27 08:22:37.317940 systemd-modules-load[317]: Inserted module 'br_netfilter' Oct 27 08:22:37.363962 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:22:37.364829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:22:37.366079 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:22:37.369363 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 08:22:37.371234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:22:37.376247 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:22:37.382476 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 27 08:22:37.387628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:22:37.388870 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:22:37.391761 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:22:37.393876 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:22:37.397001 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 27 08:22:37.414638 dracut-cmdline[354]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e6ac205aca0358d0b739fe2cba6f8244850dbdc9027fd8e7442161fce065515e Oct 27 08:22:37.441282 systemd-resolved[352]: Positive Trust Anchors: Oct 27 08:22:37.441293 systemd-resolved[352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:22:37.441296 systemd-resolved[352]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:22:37.441321 systemd-resolved[352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:22:37.466674 systemd-resolved[352]: Defaulting to hostname 'linux'. Oct 27 08:22:37.467757 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:22:37.468308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:22:37.505943 kernel: Loading iSCSI transport class v2.0-870. Oct 27 08:22:37.519938 kernel: iscsi: registered transport (tcp) Oct 27 08:22:37.549793 kernel: iscsi: registered transport (qla4xxx) Oct 27 08:22:37.549847 kernel: QLogic iSCSI HBA Driver Oct 27 08:22:37.573307 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:22:37.593637 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:22:37.598024 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:22:37.638078 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 08:22:37.639618 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 08:22:37.642009 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 08:22:37.666368 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:22:37.669781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:22:37.695629 systemd-udevd[603]: Using default interface naming scheme 'v257'. Oct 27 08:22:37.703637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:22:37.708082 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 27 08:22:37.711509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:22:37.716511 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:22:37.728118 dracut-pre-trigger[689]: rd.md=0: removing MD RAID activation Oct 27 08:22:37.747601 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 27 08:22:37.749843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:22:37.750939 systemd-networkd[695]: lo: Link UP Oct 27 08:22:37.750942 systemd-networkd[695]: lo: Gained carrier Oct 27 08:22:37.754436 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:22:37.755648 systemd[1]: Reached target network.target - Network. Oct 27 08:22:37.803980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:22:37.808201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 08:22:37.907634 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 27 08:22:37.919705 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 27 08:22:37.931130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 27 08:22:37.946056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 27 08:22:37.949204 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 27 08:22:37.961935 kernel: cryptd: max_cpu_qlen set to 1000 Oct 27 08:22:37.968926 disk-uuid[766]: Primary Header is updated. Oct 27 08:22:37.968926 disk-uuid[766]: Secondary Entries is updated. Oct 27 08:22:37.968926 disk-uuid[766]: Secondary Header is updated. Oct 27 08:22:37.991940 systemd-networkd[695]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:37.991950 systemd-networkd[695]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:22:37.993400 systemd-networkd[695]: eth1: Link UP Oct 27 08:22:37.993825 systemd-networkd[695]: eth1: Gained carrier Oct 27 08:22:37.993836 systemd-networkd[695]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:37.998043 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:37.998047 systemd-networkd[695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:22:37.998548 systemd-networkd[695]: eth0: Link UP Oct 27 08:22:38.003530 systemd-networkd[695]: eth0: Gained carrier Oct 27 08:22:38.003543 systemd-networkd[695]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:38.004339 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:22:38.004466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:22:38.006104 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:22:38.011160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:22:38.021306 systemd-networkd[695]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Oct 27 08:22:38.041382 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 27 08:22:38.041435 kernel: AES CTR mode by8 optimization enabled Oct 27 08:22:38.064287 systemd-networkd[695]: eth0: DHCPv4 address 46.62.164.160/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 27 08:22:38.107051 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
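[Annotation] eth0 and eth1 are matched by the catch-all /usr/lib/systemd/network/zz-default.network and configured via DHCP, hence the "potentially unpredictable interface name" warnings. A minimal sketch of a unit of that shape; the file name and match pattern below are illustrative, not taken from this image (matching on MACAddress= instead of Name= avoids the warning):

  # /etc/systemd/network/50-dhcp.network (example)
  [Match]
  Name=eth*

  [Network]
  DHCP=yes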
Oct 27 08:22:38.109692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 08:22:38.110695 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:22:38.111775 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:22:38.113041 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:22:38.115859 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 08:22:38.139173 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:22:39.043400 disk-uuid[769]: Warning: The kernel is still using the old partition table. Oct 27 08:22:39.043400 disk-uuid[769]: The new table will be used at the next reboot or after you Oct 27 08:22:39.043400 disk-uuid[769]: run partprobe(8) or kpartx(8) Oct 27 08:22:39.043400 disk-uuid[769]: The operation has completed successfully. Oct 27 08:22:39.053692 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 08:22:39.053855 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 08:22:39.057271 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 27 08:22:39.105016 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (856) Oct 27 08:22:39.110750 kernel: BTRFS info (device sda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:22:39.110807 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:22:39.121190 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 27 08:22:39.121237 kernel: BTRFS info (device sda6): turning on async discard Oct 27 08:22:39.123390 kernel: BTRFS info (device sda6): enabling free space tree Oct 27 08:22:39.124369 systemd-networkd[695]: eth0: Gained IPv6LL Oct 27 08:22:39.136956 kernel: BTRFS info (device sda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:22:39.137701 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 08:22:39.141052 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 08:22:39.317379 systemd-networkd[695]: eth1: Gained IPv6LL Oct 27 08:22:39.335205 ignition[875]: Ignition 2.22.0 Oct 27 08:22:39.335223 ignition[875]: Stage: fetch-offline Oct 27 08:22:39.335284 ignition[875]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:39.335308 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:39.338249 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:22:39.335489 ignition[875]: parsed url from cmdline: "" Oct 27 08:22:39.335495 ignition[875]: no config URL provided Oct 27 08:22:39.335502 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 08:22:39.342173 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
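[Annotation] disk-uuid.service warns that the kernel keeps using the old partition table until the next reboot or a re-read, and the log itself points at partprobe(8)/kpartx(8). A minimal sketch (the device name is assumed):

  partprobe /dev/sda       # ask the kernel to re-read the GPT
  kpartx -u /dev/sda       # or update device-mapper partition mappings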
Oct 27 08:22:39.335514 ignition[875]: no config at "/usr/lib/ignition/user.ign" Oct 27 08:22:39.335521 ignition[875]: failed to fetch config: resource requires networking Oct 27 08:22:39.335813 ignition[875]: Ignition finished successfully Oct 27 08:22:39.379797 ignition[881]: Ignition 2.22.0 Oct 27 08:22:39.379821 ignition[881]: Stage: fetch Oct 27 08:22:39.380129 ignition[881]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:39.380143 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:39.380252 ignition[881]: parsed url from cmdline: "" Oct 27 08:22:39.380257 ignition[881]: no config URL provided Oct 27 08:22:39.380264 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 08:22:39.380273 ignition[881]: no config at "/usr/lib/ignition/user.ign" Oct 27 08:22:39.380356 ignition[881]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 27 08:22:39.394705 unknown[881]: fetched base config from "system" Oct 27 08:22:39.387715 ignition[881]: GET result: OK Oct 27 08:22:39.394718 unknown[881]: fetched base config from "system" Oct 27 08:22:39.387892 ignition[881]: parsing config with SHA512: 6e7d34e8a4f0d26ce94981ab8e6aaa821e2a1c6b2ee2d3ee9a56b6014f6616e0458fb8d243aea319bc085d7eebb6563e5aa641ff570273926a7bd069c80f0a4b Oct 27 08:22:39.394725 unknown[881]: fetched user config from "hetzner" Oct 27 08:22:39.395174 ignition[881]: fetch: fetch complete Oct 27 08:22:39.398427 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 27 08:22:39.395181 ignition[881]: fetch: fetch passed Oct 27 08:22:39.401128 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 08:22:39.395238 ignition[881]: Ignition finished successfully Oct 27 08:22:39.435610 ignition[887]: Ignition 2.22.0 Oct 27 08:22:39.435624 ignition[887]: Stage: kargs Oct 27 08:22:39.435756 ignition[887]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:39.435764 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:39.436462 ignition[887]: kargs: kargs passed Oct 27 08:22:39.437745 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 08:22:39.436500 ignition[887]: Ignition finished successfully Oct 27 08:22:39.440022 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 27 08:22:39.462901 ignition[894]: Ignition 2.22.0 Oct 27 08:22:39.462931 ignition[894]: Stage: disks Oct 27 08:22:39.463050 ignition[894]: no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:39.465164 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 08:22:39.463056 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:39.466490 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 08:22:39.463930 ignition[894]: disks: disks passed Oct 27 08:22:39.467269 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 08:22:39.463968 ignition[894]: Ignition finished successfully Oct 27 08:22:39.468552 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:22:39.469821 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:22:39.470809 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:22:39.472725 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
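[Annotation] The fetch stage above retrieves user data from the Hetzner metadata endpoint and merges it with the base config. A hypothetical, minimal Ignition config of the kind such an endpoint could return; the spec version, user, and key below are placeholders, not the contents actually fetched here:

  {
    "ignition": { "version": "3.4.0" },
    "passwd": {
      "users": [
        { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... example"] }
      ]
    }
  }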
Oct 27 08:22:39.502089 systemd-fsck[902]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 27 08:22:39.504327 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 08:22:39.506982 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 08:22:39.656993 kernel: EXT4-fs (sda9): mounted filesystem e90e2fe3-e1db-4bff-abac-c8d1d032f674 r/w with ordered data mode. Quota mode: none. Oct 27 08:22:39.657979 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 08:22:39.660650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 08:22:39.665713 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:22:39.669302 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 08:22:39.685128 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 27 08:22:39.690449 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 08:22:39.692432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:22:39.698396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 08:22:39.701430 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (911) Oct 27 08:22:39.704805 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 08:22:39.714748 kernel: BTRFS info (device sda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:22:39.714808 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:22:39.727973 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 27 08:22:39.728038 kernel: BTRFS info (device sda6): turning on async discard Oct 27 08:22:39.728060 kernel: BTRFS info (device sda6): enabling free space tree Oct 27 08:22:39.736581 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 08:22:39.838879 coreos-metadata[913]: Oct 27 08:22:39.838 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 27 08:22:39.841410 coreos-metadata[913]: Oct 27 08:22:39.841 INFO Fetch successful Oct 27 08:22:39.843983 coreos-metadata[913]: Oct 27 08:22:39.841 INFO wrote hostname ci-9999-9-9-k-f136f833c6 to /sysroot/etc/hostname Oct 27 08:22:39.845223 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 08:22:39.846317 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 27 08:22:39.853575 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory Oct 27 08:22:39.858371 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 08:22:39.864425 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 08:22:39.984224 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 27 08:22:39.987795 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 08:22:39.992115 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 08:22:40.014581 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 08:22:40.019388 kernel: BTRFS info (device sda6): last unmount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:22:40.046632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
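[Annotation] flatcar-metadata-hostname fetches the hostname from the endpoint shown above and writes it into /sysroot/etc/hostname. The same request can be reproduced by hand from inside the instance (endpoint taken verbatim from the log):

  curl -s http://169.254.169.254/hetzner/v1/metadata/hostname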
Oct 27 08:22:40.064131 ignition[1028]: INFO : Ignition 2.22.0 Oct 27 08:22:40.064131 ignition[1028]: INFO : Stage: mount Oct 27 08:22:40.066513 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:40.066513 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:40.066513 ignition[1028]: INFO : mount: mount passed Oct 27 08:22:40.066513 ignition[1028]: INFO : Ignition finished successfully Oct 27 08:22:40.067435 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 08:22:40.072133 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 08:22:40.663346 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 08:22:40.696973 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1039) Oct 27 08:22:40.697043 kernel: BTRFS info (device sda6): first mount of filesystem 3c7e1d30-69bc-4811-963d-029e55854883 Oct 27 08:22:40.699627 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 27 08:22:40.709586 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 27 08:22:40.709635 kernel: BTRFS info (device sda6): turning on async discard Oct 27 08:22:40.713677 kernel: BTRFS info (device sda6): enabling free space tree Oct 27 08:22:40.717258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 08:22:40.756564 ignition[1055]: INFO : Ignition 2.22.0 Oct 27 08:22:40.756564 ignition[1055]: INFO : Stage: files Oct 27 08:22:40.758986 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:40.758986 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:40.758986 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Oct 27 08:22:40.762642 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 08:22:40.762642 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 08:22:40.765492 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 08:22:40.765492 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 08:22:40.768396 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 08:22:40.765833 unknown[1055]: wrote ssh authorized keys file for user: core Oct 27 08:22:40.771412 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 08:22:40.771412 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 27 08:22:40.972343 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 08:22:41.281208 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 27 08:22:41.281208 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:22:41.292459 
ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:22:41.292459 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 27 08:22:41.661225 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 27 08:22:41.929211 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 27 08:22:41.929211 ignition[1055]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 27 08:22:41.933311 ignition[1055]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:22:41.933311 ignition[1055]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 08:22:41.933311 ignition[1055]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 27 08:22:41.933311 ignition[1055]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 27 08:22:41.933311 ignition[1055]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: 
op(f): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 08:22:41.940427 ignition[1055]: INFO : files: files passed Oct 27 08:22:41.940427 ignition[1055]: INFO : Ignition finished successfully Oct 27 08:22:41.937114 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 08:22:41.941046 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 08:22:41.949007 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 08:22:41.952390 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 27 08:22:41.953093 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 08:22:41.959861 initrd-setup-root-after-ignition[1088]: grep: Oct 27 08:22:41.960862 initrd-setup-root-after-ignition[1092]: grep: Oct 27 08:22:41.960862 initrd-setup-root-after-ignition[1088]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:22:41.960862 initrd-setup-root-after-ignition[1088]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:22:41.964143 initrd-setup-root-after-ignition[1092]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 08:22:41.962379 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:22:41.963506 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 08:22:41.965306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 08:22:42.011573 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 08:22:42.011660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 27 08:22:42.013130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 08:22:42.014057 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 08:22:42.015354 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 08:22:42.017010 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 08:22:42.032325 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:22:42.034147 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 08:22:42.056299 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 27 08:22:42.056447 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:22:42.057178 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:22:42.058809 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 08:22:42.060053 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 08:22:42.060205 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 08:22:42.062047 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 08:22:42.063110 systemd[1]: Stopped target basic.target - Basic System. 
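[Annotation] The files stage above is driven by declarations in the merged Ignition config. A hedged Butane-style sketch of the kind of declarations that would produce those operations; paths and the Helm URL mirror the log, everything else (variant version, omitted contents) is illustrative:

  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true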
Oct 27 08:22:42.064571 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 08:22:42.065877 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 08:22:42.067019 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 08:22:42.068182 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 27 08:22:42.069486 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 08:22:42.071044 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 08:22:42.072389 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 08:22:42.074139 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 08:22:42.075991 systemd[1]: Stopped target swap.target - Swaps. Oct 27 08:22:42.077705 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 08:22:42.077847 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 08:22:42.079880 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:22:42.080731 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:22:42.082044 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 08:22:42.082482 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:22:42.083545 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 08:22:42.083733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 08:22:42.085579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 08:22:42.085779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 08:22:42.086823 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 08:22:42.087024 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 08:22:42.088304 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 27 08:22:42.088545 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 27 08:22:42.092127 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 08:22:42.098232 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 08:22:42.101465 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 08:22:42.101618 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:22:42.104144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 08:22:42.104250 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:22:42.105338 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 08:22:42.105589 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 08:22:42.126425 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 08:22:42.126547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 08:22:42.139944 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 27 08:22:42.151858 ignition[1112]: INFO : Ignition 2.22.0 Oct 27 08:22:42.151858 ignition[1112]: INFO : Stage: umount Oct 27 08:22:42.158626 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 08:22:42.158626 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 27 08:22:42.158626 ignition[1112]: INFO : umount: umount passed Oct 27 08:22:42.158626 ignition[1112]: INFO : Ignition finished successfully Oct 27 08:22:42.156878 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 08:22:42.157017 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 08:22:42.159610 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 08:22:42.159677 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 08:22:42.161138 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 08:22:42.161187 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 08:22:42.166526 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 27 08:22:42.166573 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 27 08:22:42.167204 systemd[1]: Stopped target network.target - Network. Oct 27 08:22:42.170270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 08:22:42.170419 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 08:22:42.171453 systemd[1]: Stopped target paths.target - Path Units. Oct 27 08:22:42.175200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 08:22:42.179089 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:22:42.180456 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 08:22:42.182161 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 08:22:42.183751 systemd[1]: iscsid.socket: Deactivated successfully. Oct 27 08:22:42.183805 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 08:22:42.185591 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 08:22:42.185640 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 08:22:42.187519 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 08:22:42.187593 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 08:22:42.189161 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 08:22:42.189224 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 08:22:42.191164 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 08:22:42.192898 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 08:22:42.194437 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 08:22:42.194525 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 08:22:42.199300 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 08:22:42.199459 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 08:22:42.201822 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 08:22:42.202060 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 08:22:42.205518 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 08:22:42.205651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Oct 27 08:22:42.209848 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 27 08:22:42.211501 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 08:22:42.211546 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:22:42.215112 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 08:22:42.216782 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 08:22:42.216862 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 08:22:42.220519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 08:22:42.220589 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:22:42.223747 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 08:22:42.223803 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 08:22:42.225434 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:22:42.239442 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 27 08:22:42.239604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:22:42.241621 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 08:22:42.241691 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 08:22:42.244191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 08:22:42.244232 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:22:42.244985 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 08:22:42.245063 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 08:22:42.248014 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 08:22:42.248090 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 08:22:42.248807 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 08:22:42.248867 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 08:22:42.252888 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 08:22:42.258301 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 27 08:22:42.258393 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:22:42.260241 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 27 08:22:42.260299 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 08:22:42.261865 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 27 08:22:42.263966 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:22:42.265177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 08:22:42.265232 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:22:42.267081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 08:22:42.267144 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:22:42.271859 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 08:22:42.273249 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Oct 27 08:22:42.284201 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 08:22:42.284316 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 08:22:42.288099 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 08:22:42.289869 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 08:22:42.305169 systemd[1]: Switching root. Oct 27 08:22:42.352753 systemd-journald[316]: Journal stopped Oct 27 08:22:43.403372 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Oct 27 08:22:43.403445 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 08:22:43.403466 kernel: SELinux: policy capability open_perms=1 Oct 27 08:22:43.403486 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 08:22:43.403506 kernel: SELinux: policy capability always_check_network=0 Oct 27 08:22:43.403530 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 08:22:43.403546 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 08:22:43.403560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 08:22:43.403581 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 08:22:43.403597 kernel: SELinux: policy capability userspace_initial_context=0 Oct 27 08:22:43.403611 kernel: audit: type=1403 audit(1761553362.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 08:22:43.403635 systemd[1]: Successfully loaded SELinux policy in 61.669ms. Oct 27 08:22:43.403660 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.138ms. Oct 27 08:22:43.403680 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 08:22:43.403699 systemd[1]: Detected virtualization kvm. Oct 27 08:22:43.403720 systemd[1]: Detected architecture x86-64. Oct 27 08:22:43.403737 systemd[1]: Detected first boot. Oct 27 08:22:43.403755 systemd[1]: Hostname set to . Oct 27 08:22:43.403775 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 27 08:22:43.403792 zram_generator::config[1156]: No configuration found. Oct 27 08:22:43.403810 kernel: Guest personality initialized and is inactive Oct 27 08:22:43.403825 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 27 08:22:43.403840 kernel: Initialized host personality Oct 27 08:22:43.403855 kernel: NET: Registered PF_VSOCK protocol family Oct 27 08:22:43.403874 systemd[1]: Populated /etc with preset unit settings. Oct 27 08:22:43.403891 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 08:22:43.403906 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 08:22:43.404984 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 08:22:43.405027 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 08:22:43.405046 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 27 08:22:43.405062 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 08:22:43.405083 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 08:22:43.405101 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Oct 27 08:22:43.405121 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 08:22:43.405138 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 08:22:43.405153 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 08:22:43.405171 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 08:22:43.405186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 08:22:43.405201 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 08:22:43.405217 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 27 08:22:43.405233 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 08:22:43.405251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 08:22:43.405266 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 27 08:22:43.405283 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 08:22:43.405298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 08:22:43.405315 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 08:22:43.405333 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 08:22:43.405348 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 08:22:43.405368 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 08:22:43.405383 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 08:22:43.405399 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 08:22:43.405416 systemd[1]: Reached target slices.target - Slice Units. Oct 27 08:22:43.405433 systemd[1]: Reached target swap.target - Swaps. Oct 27 08:22:43.405451 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 08:22:43.405470 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 08:22:43.405490 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 08:22:43.405506 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 08:22:43.405522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 08:22:43.407408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 08:22:43.407428 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 08:22:43.407439 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 08:22:43.407449 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 08:22:43.407464 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 08:22:43.407473 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:22:43.407482 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 08:22:43.407492 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 08:22:43.407501 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 27 08:22:43.407511 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 08:22:43.407521 systemd[1]: Reached target machines.target - Containers. Oct 27 08:22:43.407532 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 08:22:43.407542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:22:43.407551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 08:22:43.407560 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 08:22:43.407569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:22:43.407578 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:22:43.407587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:22:43.407597 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 08:22:43.407606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:22:43.407615 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 08:22:43.407625 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 08:22:43.407634 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 08:22:43.407644 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 08:22:43.407655 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 08:22:43.407664 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:22:43.407674 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 08:22:43.407683 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 08:22:43.407694 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 08:22:43.407704 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 08:22:43.407713 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 08:22:43.407722 kernel: fuse: init (API version 7.41) Oct 27 08:22:43.407732 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 08:22:43.407741 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:22:43.407751 kernel: ACPI: bus type drm_connector registered Oct 27 08:22:43.407761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 08:22:43.407770 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 08:22:43.407779 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 08:22:43.407788 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 27 08:22:43.407798 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 08:22:43.407807 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Oct 27 08:22:43.407816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 08:22:43.407853 systemd-journald[1226]: Collecting audit messages is disabled. Oct 27 08:22:43.407879 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 08:22:43.407889 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 08:22:43.407898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:22:43.407907 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:22:43.412395 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:22:43.412411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:22:43.412424 systemd-journald[1226]: Journal started Oct 27 08:22:43.412450 systemd-journald[1226]: Runtime Journal (/run/log/journal/4a46d62aaeb84e1eb3152cd38f3015ed) is 4.7M, max 38.3M, 33.5M free. Oct 27 08:22:43.104709 systemd[1]: Queued start job for default target multi-user.target. Oct 27 08:22:43.126447 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 27 08:22:43.127107 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 08:22:43.415208 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 08:22:43.417118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:22:43.417908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:22:43.418626 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 08:22:43.418968 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 08:22:43.420472 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:22:43.420595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:22:43.421410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 08:22:43.423006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 08:22:43.425312 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 08:22:43.426362 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 08:22:43.434747 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 08:22:43.438134 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 08:22:43.439688 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 27 08:22:43.443073 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 08:22:43.447980 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 08:22:43.448529 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 08:22:43.448555 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 08:22:43.451194 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 08:22:43.454286 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:22:43.457042 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 08:22:43.458818 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Oct 27 08:22:43.460072 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:22:43.461001 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 08:22:43.461486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:22:43.464004 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 08:22:43.468874 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 08:22:43.477087 systemd-journald[1226]: Time spent on flushing to /var/log/journal/4a46d62aaeb84e1eb3152cd38f3015ed is 50.375ms for 1160 entries. Oct 27 08:22:43.477087 systemd-journald[1226]: System Journal (/var/log/journal/4a46d62aaeb84e1eb3152cd38f3015ed) is 8M, max 588.1M, 580.1M free. Oct 27 08:22:43.536062 systemd-journald[1226]: Received client request to flush runtime journal. Oct 27 08:22:43.536101 kernel: loop1: detected capacity change from 0 to 128048 Oct 27 08:22:43.471682 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 08:22:43.472980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 08:22:43.475336 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 08:22:43.476533 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 08:22:43.486696 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 08:22:43.487645 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 08:22:43.492519 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 08:22:43.494076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 08:22:43.521727 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Oct 27 08:22:43.521737 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Oct 27 08:22:43.525449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 08:22:43.529055 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 08:22:43.537400 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 08:22:43.549076 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 08:22:43.556187 kernel: loop2: detected capacity change from 0 to 110984 Oct 27 08:22:43.565355 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 08:22:43.569248 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 08:22:43.574173 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 08:22:43.588935 kernel: loop3: detected capacity change from 0 to 8 Oct 27 08:22:43.589041 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 08:22:43.602250 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Oct 27 08:22:43.602465 systemd-tmpfiles[1303]: ACLs are not supported, ignoring. Oct 27 08:22:43.605879 kernel: loop4: detected capacity change from 0 to 219144 Oct 27 08:22:43.607606 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 27 08:22:43.632564 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 08:22:43.640978 kernel: loop5: detected capacity change from 0 to 128048 Oct 27 08:22:43.658930 kernel: loop6: detected capacity change from 0 to 110984 Oct 27 08:22:43.674953 kernel: loop7: detected capacity change from 0 to 8 Oct 27 08:22:43.679983 kernel: loop1: detected capacity change from 0 to 219144 Oct 27 08:22:43.694133 (sd-merge)[1313]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Oct 27 08:22:43.698666 (sd-merge)[1313]: Merged extensions into '/usr'. Oct 27 08:22:43.700204 systemd-resolved[1301]: Positive Trust Anchors: Oct 27 08:22:43.700606 systemd-resolved[1301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 08:22:43.700670 systemd-resolved[1301]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 27 08:22:43.700752 systemd-resolved[1301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 08:22:43.703147 systemd[1]: Reload requested from client PID 1281 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 08:22:43.703165 systemd[1]: Reloading... Oct 27 08:22:43.719223 systemd-resolved[1301]: Using system hostname 'ci-9999-9-9-k-f136f833c6'. Oct 27 08:22:43.770938 zram_generator::config[1342]: No configuration found. Oct 27 08:22:43.931338 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 08:22:43.931749 systemd[1]: Reloading finished in 228 ms. Oct 27 08:22:43.946822 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 08:22:43.947641 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 08:22:43.948369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 08:22:43.951525 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 08:22:43.964021 systemd[1]: Starting ensure-sysext.service... Oct 27 08:22:43.967075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 08:22:43.978042 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 08:22:43.988586 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 27 08:22:43.988850 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 27 08:22:43.989153 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 08:22:43.989397 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 08:22:43.989655 systemd[1]: Reload requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)... Oct 27 08:22:43.989816 systemd[1]: Reloading... 
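[Annotation] The (sd-merge) lines show systemd-sysext overlaying the listed extension images onto /usr. A hedged sketch of driving the same mechanism by hand; systemd-sysext is the standard tool and the image name is the one written by Ignition above:

  ls -l /etc/extensions/kubernetes.raw   # extension images live in /etc/extensions or /var/lib/extensions
  systemd-sysext status                  # show which extensions are currently merged
  systemd-sysext refresh                 # re-merge after adding or removing images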
Oct 27 08:22:43.990034 systemd-tmpfiles[1387]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 08:22:43.990211 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Oct 27 08:22:43.990246 systemd-tmpfiles[1387]: ACLs are not supported, ignoring. Oct 27 08:22:43.995280 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:22:43.996938 systemd-tmpfiles[1387]: Skipping /boot Oct 27 08:22:44.006626 systemd-tmpfiles[1387]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 08:22:44.007440 systemd-tmpfiles[1387]: Skipping /boot Oct 27 08:22:44.024272 systemd-udevd[1388]: Using default interface naming scheme 'v257'. Oct 27 08:22:44.064175 zram_generator::config[1420]: No configuration found. Oct 27 08:22:44.157939 kernel: mousedev: PS/2 mouse device common for all mice Oct 27 08:22:44.183971 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Oct 27 08:22:44.194931 kernel: ACPI: button: Power Button [PWRF] Oct 27 08:22:44.249968 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Oct 27 08:22:44.255624 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Oct 27 08:22:44.266953 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 27 08:22:44.267238 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 27 08:22:44.277090 kernel: Console: switching to colour dummy device 80x25 Oct 27 08:22:44.279206 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 27 08:22:44.279240 kernel: [drm] features: -context_init Oct 27 08:22:44.282991 kernel: [drm] number of scanouts: 1 Oct 27 08:22:44.283040 kernel: [drm] number of cap sets: 0 Oct 27 08:22:44.284934 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Oct 27 08:22:44.285929 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 27 08:22:44.288770 kernel: Console: switching to colour frame buffer device 160x50 Oct 27 08:22:44.292943 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 27 08:22:44.305946 kernel: EDAC MC: Ver: 3.0.0 Oct 27 08:22:44.344533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 27 08:22:44.346253 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 27 08:22:44.346376 systemd[1]: Reloading finished in 356 ms. Oct 27 08:22:44.358684 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 08:22:44.374591 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 08:22:44.430685 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Oct 27 08:22:44.449470 systemd[1]: Finished ensure-sysext.service. Oct 27 08:22:44.459777 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:22:44.460730 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:22:44.462071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 08:22:44.463707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 08:22:44.465682 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 27 08:22:44.471586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 08:22:44.473952 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 08:22:44.475154 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 08:22:44.475987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 08:22:44.477406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 08:22:44.478559 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 08:22:44.479680 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 08:22:44.480843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 08:22:44.484305 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 08:22:44.487600 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 08:22:44.493695 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 08:22:44.502951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 08:22:44.503043 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 27 08:22:44.511723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 08:22:44.516335 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 08:22:44.518232 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 08:22:44.518490 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 08:22:44.518597 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 08:22:44.533706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 08:22:44.538508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 08:22:44.543384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 08:22:44.548817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 08:22:44.555824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 08:22:44.556154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 08:22:44.560617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 08:22:44.606145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 08:22:44.634059 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 27 08:22:44.635013 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 08:22:44.636650 augenrules[1558]: No rules Oct 27 08:22:44.638002 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:22:44.638266 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Oct 27 08:22:44.664152 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 08:22:44.667332 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 08:22:44.669836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 08:22:44.688568 systemd-networkd[1521]: lo: Link UP Oct 27 08:22:44.688576 systemd-networkd[1521]: lo: Gained carrier Oct 27 08:22:44.695431 systemd-networkd[1521]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:44.695440 systemd-networkd[1521]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:22:44.696607 systemd-networkd[1521]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:44.696611 systemd-networkd[1521]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 08:22:44.697378 systemd-networkd[1521]: eth0: Link UP Oct 27 08:22:44.697518 systemd-networkd[1521]: eth1: Link UP Oct 27 08:22:44.697685 systemd-networkd[1521]: eth0: Gained carrier Oct 27 08:22:44.697697 systemd-networkd[1521]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:44.697972 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 08:22:44.698515 systemd[1]: Reached target network.target - Network. Oct 27 08:22:44.701082 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 08:22:44.703282 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 08:22:44.703571 systemd-networkd[1521]: eth1: Gained carrier Oct 27 08:22:44.703585 systemd-networkd[1521]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 27 08:22:44.729207 systemd-networkd[1521]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Oct 27 08:22:44.730805 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Oct 27 08:22:44.733655 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 08:22:44.768039 systemd-networkd[1521]: eth0: DHCPv4 address 46.62.164.160/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 27 08:22:44.769014 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Oct 27 08:22:45.023455 ldconfig[1514]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 08:22:45.026361 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 08:22:45.028165 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 08:22:45.048516 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 08:22:45.051233 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 08:22:45.051721 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
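Worth noting in the networkd entries above: eth0 receives 46.62.164.160/32 with a gateway of 172.31.1.1, i.e. a host-scoped address with an off-subnet gateway, while eth1 gets the private 10.0.0.3/32. A small sketch, assuming iproute2 with JSON output (the -j flag, available on Flatcar), that dumps the addresses networkd actually applied so they can be compared with these DHCP leases:

    # Minimal sketch: list the IPv4 addresses configured on each interface,
    # mirroring what systemd-networkd logged above. Assumes iproute2 with
    # JSON support (`ip -j`).
    import json
    import subprocess

    def ipv4_addresses() -> dict[str, list[str]]:
        out = subprocess.run(["ip", "-j", "-4", "addr", "show"],
                             capture_output=True, text=True, check=True).stdout
        result: dict[str, list[str]] = {}
        for link in json.loads(out):
            addrs = [f"{a['local']}/{a['prefixlen']}" for a in link.get("addr_info", [])]
            if addrs:
                result[link["ifname"]] = addrs
        return result

    if __name__ == "__main__":
        for ifname, addrs in ipv4_addresses().items():
            print(ifname, ", ".join(addrs))   # e.g. eth0 46.62.164.160/32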
Oct 27 08:22:45.052746 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 08:22:45.053490 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 27 08:22:45.054156 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 08:22:45.054860 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 08:22:45.055523 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 08:22:45.056188 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 08:22:45.056224 systemd[1]: Reached target paths.target - Path Units. Oct 27 08:22:45.057031 systemd[1]: Reached target timers.target - Timer Units. Oct 27 08:22:45.058800 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 08:22:45.060444 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 08:22:45.065370 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 08:22:45.069212 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 08:22:45.069851 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 08:22:45.083594 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 08:22:45.084454 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 08:22:45.085599 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 08:22:45.087789 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 08:22:45.088186 systemd[1]: Reached target basic.target - Basic System. Oct 27 08:22:45.088555 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:22:45.088583 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 27 08:22:45.089690 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 08:22:45.093120 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 27 08:22:45.098580 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 08:22:45.106250 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 08:22:45.112104 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 08:22:45.117094 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 08:22:45.119285 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 08:22:45.123259 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 27 08:22:45.132105 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 08:22:45.137228 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 08:22:45.141117 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Oct 27 08:22:45.146286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Oct 27 08:22:45.147247 oslogin_cache_refresh[1587]: Refreshing passwd entry cache Oct 27 08:22:45.148350 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing passwd entry cache Oct 27 08:22:45.150724 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 08:22:45.155453 coreos-metadata[1580]: Oct 27 08:22:45.154 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 27 08:22:45.156236 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 08:22:45.163081 coreos-metadata[1580]: Oct 27 08:22:45.157 INFO Fetch successful Oct 27 08:22:45.163081 coreos-metadata[1580]: Oct 27 08:22:45.157 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 27 08:22:45.163081 coreos-metadata[1580]: Oct 27 08:22:45.157 INFO Fetch successful Oct 27 08:22:45.158855 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 08:22:45.166092 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting users, quitting Oct 27 08:22:45.166092 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:22:45.166092 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Refreshing group entry cache Oct 27 08:22:45.165610 oslogin_cache_refresh[1587]: Failure getting users, quitting Oct 27 08:22:45.165629 oslogin_cache_refresh[1587]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 27 08:22:45.165676 oslogin_cache_refresh[1587]: Refreshing group entry cache Oct 27 08:22:45.166565 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Failure getting groups, quitting Oct 27 08:22:45.166565 google_oslogin_nss_cache[1587]: oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:22:45.166519 oslogin_cache_refresh[1587]: Failure getting groups, quitting Oct 27 08:22:45.166527 oslogin_cache_refresh[1587]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 27 08:22:45.170959 jq[1584]: false Oct 27 08:22:45.171856 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 08:22:45.174936 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 08:22:45.179640 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 08:22:45.183954 extend-filesystems[1586]: Found /dev/sda6 Oct 27 08:22:45.187905 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 08:22:45.195258 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 08:22:45.198152 extend-filesystems[1586]: Found /dev/sda9 Oct 27 08:22:45.198447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 08:22:45.199046 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 27 08:22:45.199232 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 27 08:22:45.210603 jq[1603]: true Oct 27 08:22:45.214525 extend-filesystems[1586]: Checking size of /dev/sda9 Oct 27 08:22:45.222208 update_engine[1599]: I20251027 08:22:45.218864 1599 main.cc:92] Flatcar Update Engine starting Oct 27 08:22:45.223485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
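The coreos-metadata entries above fetch instance data from Hetzner's link-local endpoint, http://169.254.169.254/hetzner/v1/metadata and its /private-networks subpath. A minimal sketch of the same two requests, assuming it runs on the instance itself (the endpoint is unauthenticated but only reachable from inside the VM):

    # Minimal sketch of the metadata requests coreos-metadata logged above.
    # The link-local endpoint is only reachable from inside the instance.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/hetzner/v1/metadata"

    def fetch(path: str = "") -> str:
        with urlopen(f"{BASE}{path}", timeout=5) as resp:  # plain HTTP, no auth
            return resp.read().decode()

    if __name__ == "__main__":
        print(fetch())                      # full metadata document
        print(fetch("/private-networks"))   # second request from the log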
Oct 27 08:22:45.223645 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 08:22:45.241249 extend-filesystems[1586]: Resized partition /dev/sda9 Oct 27 08:22:45.241560 (ntainerd)[1628]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 08:22:45.252833 jq[1619]: true Oct 27 08:22:45.255542 systemd[1]: motdgen.service: Deactivated successfully. Oct 27 08:22:45.255706 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 08:22:45.268297 tar[1611]: linux-amd64/LICENSE Oct 27 08:22:45.268513 tar[1611]: linux-amd64/helm Oct 27 08:22:45.269720 extend-filesystems[1642]: resize2fs 1.47.3 (8-Jul-2025) Oct 27 08:22:45.288937 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Oct 27 08:22:45.294682 systemd-logind[1593]: New seat seat0. Oct 27 08:22:45.296794 systemd-logind[1593]: Watching system buttons on /dev/input/event3 (Power Button) Oct 27 08:22:45.296885 systemd-logind[1593]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 27 08:22:45.298482 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 08:22:45.308462 dbus-daemon[1581]: [system] SELinux support is enabled Oct 27 08:22:45.308865 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 08:22:45.327426 update_engine[1599]: I20251027 08:22:45.312520 1599 update_check_scheduler.cc:74] Next update check in 6m6s Oct 27 08:22:45.325839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 08:22:45.325865 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 08:22:45.326978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 08:22:45.327005 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 08:22:45.329003 systemd[1]: Started update-engine.service - Update Engine. Oct 27 08:22:45.332013 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 27 08:22:45.332783 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 08:22:45.338286 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 08:22:45.333138 dbus-daemon[1581]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 27 08:22:45.416270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 08:22:45.422621 bash[1661]: Updated "/home/core/.ssh/authorized_keys" Oct 27 08:22:45.423228 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 08:22:45.427760 systemd[1]: Starting sshkeys.service... Oct 27 08:22:45.465936 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Oct 27 08:22:45.487659 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 27 08:22:45.492211 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Oct 27 08:22:45.521469 extend-filesystems[1642]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 27 08:22:45.521469 extend-filesystems[1642]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 27 08:22:45.521469 extend-filesystems[1642]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Oct 27 08:22:45.524080 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 08:22:45.538739 extend-filesystems[1586]: Resized filesystem in /dev/sda9 Oct 27 08:22:45.549486 sshd_keygen[1601]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 08:22:45.524256 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 08:22:45.550737 coreos-metadata[1673]: Oct 27 08:22:45.550 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 27 08:22:45.550737 coreos-metadata[1673]: Oct 27 08:22:45.550 INFO Fetch successful Oct 27 08:22:45.552323 unknown[1673]: wrote ssh authorized keys file for user: core Oct 27 08:22:45.572322 locksmithd[1663]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 08:22:45.573465 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 08:22:45.578321 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 08:22:45.582443 systemd[1]: Started sshd@0-46.62.164.160:22-147.75.109.163:38594.service - OpenSSH per-connection server daemon (147.75.109.163:38594). Oct 27 08:22:45.587002 update-ssh-keys[1688]: Updated "/home/core/.ssh/authorized_keys" Oct 27 08:22:45.589036 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 27 08:22:45.602385 systemd[1]: Finished sshkeys.service. Oct 27 08:22:45.619508 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 08:22:45.620853 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 08:22:45.625576 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 08:22:45.652237 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 08:22:45.659026 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 27 08:22:45.661228 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 27 08:22:45.662637 containerd[1628]: time="2025-10-27T08:22:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 27 08:22:45.663197 systemd[1]: Reached target getty.target - Login Prompts. 
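For scale, the extend-filesystems entries above grow /dev/sda9 online from 1617920 to 8410107 blocks of 4 KiB each; converted, that is roughly 6.2 GiB before and 32.1 GiB after the resize:

    # Worked conversion of the resize2fs block counts logged above
    # (ext4, 4 KiB blocks) into GiB.
    BLOCK = 4096
    for label, blocks in (("before", 1_617_920), ("after", 8_410_107)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 6.17 GiB
    # after:  32.08 GiB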
Oct 27 08:22:45.665515 containerd[1628]: time="2025-10-27T08:22:45.665477275Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 27 08:22:45.675255 containerd[1628]: time="2025-10-27T08:22:45.675221927Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.405µs" Oct 27 08:22:45.675255 containerd[1628]: time="2025-10-27T08:22:45.675250261Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 27 08:22:45.675322 containerd[1628]: time="2025-10-27T08:22:45.675265209Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 27 08:22:45.675536 containerd[1628]: time="2025-10-27T08:22:45.675512201Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 27 08:22:45.675536 containerd[1628]: time="2025-10-27T08:22:45.675534874Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 27 08:22:45.675576 containerd[1628]: time="2025-10-27T08:22:45.675554751Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:22:45.675644 containerd[1628]: time="2025-10-27T08:22:45.675623471Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 27 08:22:45.675665 containerd[1628]: time="2025-10-27T08:22:45.675640562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:22:45.675878 containerd[1628]: time="2025-10-27T08:22:45.675855115Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 27 08:22:45.675878 containerd[1628]: time="2025-10-27T08:22:45.675873860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:22:45.675945 containerd[1628]: time="2025-10-27T08:22:45.675883167Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 27 08:22:45.675945 containerd[1628]: time="2025-10-27T08:22:45.675890150Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 27 08:22:45.676015 containerd[1628]: time="2025-10-27T08:22:45.675972375Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 27 08:22:45.676358 containerd[1628]: time="2025-10-27T08:22:45.676334484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:22:45.676385 containerd[1628]: time="2025-10-27T08:22:45.676366132Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 27 08:22:45.676406 containerd[1628]: time="2025-10-27T08:22:45.676375721Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 27 08:22:45.676439 containerd[1628]: 
time="2025-10-27T08:22:45.676420134Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 27 08:22:45.676770 containerd[1628]: time="2025-10-27T08:22:45.676670614Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 27 08:22:45.676770 containerd[1628]: time="2025-10-27T08:22:45.676740084Z" level=info msg="metadata content store policy set" policy=shared Oct 27 08:22:45.679651 containerd[1628]: time="2025-10-27T08:22:45.679628159Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 27 08:22:45.679685 containerd[1628]: time="2025-10-27T08:22:45.679666762Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 27 08:22:45.679685 containerd[1628]: time="2025-10-27T08:22:45.679678203Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 27 08:22:45.679727 containerd[1628]: time="2025-10-27T08:22:45.679686749Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 27 08:22:45.679746 containerd[1628]: time="2025-10-27T08:22:45.679727025Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 27 08:22:45.679746 containerd[1628]: time="2025-10-27T08:22:45.679735301Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 27 08:22:45.679746 containerd[1628]: time="2025-10-27T08:22:45.679744478Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 27 08:22:45.679788 containerd[1628]: time="2025-10-27T08:22:45.679753675Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 27 08:22:45.679788 containerd[1628]: time="2025-10-27T08:22:45.679766569Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 27 08:22:45.679788 containerd[1628]: time="2025-10-27T08:22:45.679774183Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 27 08:22:45.679830 containerd[1628]: time="2025-10-27T08:22:45.679798208Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 27 08:22:45.679830 containerd[1628]: time="2025-10-27T08:22:45.679811564Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.679906071Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.679949102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.679962687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.679986782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.679998123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: 
time="2025-10-27T08:22:45.680006559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680014494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680037517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680045813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680053847Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680061381Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680106216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680116175Z" level=info msg="Start snapshots syncer" Oct 27 08:22:45.680281 containerd[1628]: time="2025-10-27T08:22:45.680138186Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 27 08:22:45.680483 containerd[1628]: time="2025-10-27T08:22:45.680353740Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 27 08:22:45.680483 containerd[1628]: time="2025-10-27T08:22:45.680410136Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 27 08:22:45.680570 
containerd[1628]: time="2025-10-27T08:22:45.680467182Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 27 08:22:45.680586 containerd[1628]: time="2025-10-27T08:22:45.680571238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 27 08:22:45.680601 containerd[1628]: time="2025-10-27T08:22:45.680588841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 27 08:22:45.680601 containerd[1628]: time="2025-10-27T08:22:45.680597367Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 27 08:22:45.680635 containerd[1628]: time="2025-10-27T08:22:45.680613517Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 27 08:22:45.680635 containerd[1628]: time="2025-10-27T08:22:45.680623225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 27 08:22:45.680635 containerd[1628]: time="2025-10-27T08:22:45.680631170Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 27 08:22:45.680675 containerd[1628]: time="2025-10-27T08:22:45.680644034Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 27 08:22:45.680675 containerd[1628]: time="2025-10-27T08:22:45.680660785Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 27 08:22:45.680675 containerd[1628]: time="2025-10-27T08:22:45.680668531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 27 08:22:45.680717 containerd[1628]: time="2025-10-27T08:22:45.680676174Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 27 08:22:45.680717 containerd[1628]: time="2025-10-27T08:22:45.680710289Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:22:45.680745 containerd[1628]: time="2025-10-27T08:22:45.680721630Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 27 08:22:45.680745 containerd[1628]: time="2025-10-27T08:22:45.680728703Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:22:45.680745 containerd[1628]: time="2025-10-27T08:22:45.680736097Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 27 08:22:45.680745 containerd[1628]: time="2025-10-27T08:22:45.680741557Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 27 08:22:45.680805 containerd[1628]: time="2025-10-27T08:22:45.680748149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 27 08:22:45.680805 containerd[1628]: time="2025-10-27T08:22:45.680793204Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 27 08:22:45.680832 containerd[1628]: time="2025-10-27T08:22:45.680807741Z" level=info msg="runtime interface created" Oct 27 08:22:45.680832 containerd[1628]: time="2025-10-27T08:22:45.680812149Z" level=info msg="created NRI 
interface" Oct 27 08:22:45.680832 containerd[1628]: time="2025-10-27T08:22:45.680818091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 27 08:22:45.680832 containerd[1628]: time="2025-10-27T08:22:45.680826416Z" level=info msg="Connect containerd service" Oct 27 08:22:45.680907 containerd[1628]: time="2025-10-27T08:22:45.680846915Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 08:22:45.681542 containerd[1628]: time="2025-10-27T08:22:45.681503396Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785527301Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785581392Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785610157Z" level=info msg="Start subscribing containerd event" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785632709Z" level=info msg="Start recovering state" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785705656Z" level=info msg="Start event monitor" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785716756Z" level=info msg="Start cni network conf syncer for default" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785722737Z" level=info msg="Start streaming server" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785732155Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785737766Z" level=info msg="runtime interface starting up..." Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785742204Z" level=info msg="starting plugins..." Oct 27 08:22:45.788883 containerd[1628]: time="2025-10-27T08:22:45.785753365Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 27 08:22:45.786038 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 08:22:45.790610 containerd[1628]: time="2025-10-27T08:22:45.790590906Z" level=info msg="containerd successfully booted in 0.128289s" Oct 27 08:22:45.826831 tar[1611]: linux-amd64/README.md Oct 27 08:22:45.844030 systemd-networkd[1521]: eth0: Gained IPv6LL Oct 27 08:22:45.845441 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Oct 27 08:22:45.846425 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 08:22:45.848586 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 08:22:45.851224 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 08:22:45.859021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:22:45.862091 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 08:22:45.889843 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 08:22:46.548242 systemd-networkd[1521]: eth1: Gained IPv6LL Oct 27 08:22:46.549068 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. 
Oct 27 08:22:46.634380 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 38594 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:46.635566 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:46.642430 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 08:22:46.645324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 08:22:46.654869 systemd-logind[1593]: New session 1 of user core. Oct 27 08:22:46.664328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 08:22:46.669248 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 08:22:46.684298 (systemd)[1739]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 08:22:46.687638 systemd-logind[1593]: New session c1 of user core. Oct 27 08:22:46.734860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:22:46.735644 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 08:22:46.744168 (kubelet)[1750]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:22:46.806207 systemd[1739]: Queued start job for default target default.target. Oct 27 08:22:46.812658 systemd[1739]: Created slice app.slice - User Application Slice. Oct 27 08:22:46.813068 systemd[1739]: Reached target paths.target - Paths. Oct 27 08:22:46.813130 systemd[1739]: Reached target timers.target - Timers. Oct 27 08:22:46.814198 systemd[1739]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 08:22:46.824700 systemd[1739]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 08:22:46.824787 systemd[1739]: Reached target sockets.target - Sockets. Oct 27 08:22:46.824827 systemd[1739]: Reached target basic.target - Basic System. Oct 27 08:22:46.824855 systemd[1739]: Reached target default.target - Main User Target. Oct 27 08:22:46.824876 systemd[1739]: Startup finished in 129ms. Oct 27 08:22:46.825047 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 08:22:46.831050 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 08:22:46.832206 systemd[1]: Startup finished in 2.831s (kernel) + 5.496s (initrd) + 4.431s (userspace) = 12.760s. Oct 27 08:22:47.220857 kubelet[1750]: E1027 08:22:47.220482 1750 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:22:47.223493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:22:47.223723 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:22:47.224314 systemd[1]: kubelet.service: Consumed 836ms CPU time, 259.8M memory peak. Oct 27 08:22:47.581172 systemd[1]: Started sshd@1-46.62.164.160:22-147.75.109.163:38608.service - OpenSSH per-connection server daemon (147.75.109.163:38608). 
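The kubelet exit above is expected at this stage: the unit starts, finds no /var/lib/kubelet/config.yaml, and exits with status 1; the KUBELET_KUBEADM_ARGS variable referenced by the unit suggests the file is only written once the node is bootstrapped. A trivial pre-flight sketch, using the path straight from the error message:

    # Pre-flight sketch for the kubelet failure logged above: the service
    # keeps exiting with status 1 until /var/lib/kubelet/config.yaml exists.
    # The path comes from the error message; how it gets created is outside
    # this log.
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if CONFIG.is_file():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes)")
    else:
        print(f"{CONFIG} missing; kubelet will fail until the node is bootstrapped")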
Oct 27 08:22:48.700712 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 38608 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:48.702196 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:48.708377 systemd-logind[1593]: New session 2 of user core. Oct 27 08:22:48.716254 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 08:22:49.471817 sshd[1769]: Connection closed by 147.75.109.163 port 38608 Oct 27 08:22:49.472409 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Oct 27 08:22:49.476307 systemd[1]: sshd@1-46.62.164.160:22-147.75.109.163:38608.service: Deactivated successfully. Oct 27 08:22:49.477788 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 08:22:49.478650 systemd-logind[1593]: Session 2 logged out. Waiting for processes to exit. Oct 27 08:22:49.479849 systemd-logind[1593]: Removed session 2. Oct 27 08:22:49.660406 systemd[1]: Started sshd@2-46.62.164.160:22-147.75.109.163:50152.service - OpenSSH per-connection server daemon (147.75.109.163:50152). Oct 27 08:22:50.768523 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 50152 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:50.770704 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:50.778820 systemd-logind[1593]: New session 3 of user core. Oct 27 08:22:50.787248 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 08:22:51.526358 sshd[1778]: Connection closed by 147.75.109.163 port 50152 Oct 27 08:22:51.527168 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Oct 27 08:22:51.533479 systemd-logind[1593]: Session 3 logged out. Waiting for processes to exit. Oct 27 08:22:51.533693 systemd[1]: sshd@2-46.62.164.160:22-147.75.109.163:50152.service: Deactivated successfully. Oct 27 08:22:51.536625 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 08:22:51.538813 systemd-logind[1593]: Removed session 3. Oct 27 08:22:51.731268 systemd[1]: Started sshd@3-46.62.164.160:22-147.75.109.163:50156.service - OpenSSH per-connection server daemon (147.75.109.163:50156). Oct 27 08:22:52.847377 sshd[1784]: Accepted publickey for core from 147.75.109.163 port 50156 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:52.848682 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:52.854422 systemd-logind[1593]: New session 4 of user core. Oct 27 08:22:52.860064 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 08:22:53.609884 sshd[1787]: Connection closed by 147.75.109.163 port 50156 Oct 27 08:22:53.610429 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Oct 27 08:22:53.614577 systemd-logind[1593]: Session 4 logged out. Waiting for processes to exit. Oct 27 08:22:53.614695 systemd[1]: sshd@3-46.62.164.160:22-147.75.109.163:50156.service: Deactivated successfully. Oct 27 08:22:53.616308 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 08:22:53.617778 systemd-logind[1593]: Removed session 4. Oct 27 08:22:53.772014 systemd[1]: Started sshd@4-46.62.164.160:22-147.75.109.163:50158.service - OpenSSH per-connection server daemon (147.75.109.163:50158). 
Oct 27 08:22:54.791957 sshd[1793]: Accepted publickey for core from 147.75.109.163 port 50158 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:54.793335 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:54.798114 systemd-logind[1593]: New session 5 of user core. Oct 27 08:22:54.807090 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 08:22:55.350624 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 08:22:55.351247 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:22:55.370766 sudo[1797]: pam_unix(sudo:session): session closed for user root Oct 27 08:22:55.535246 sshd[1796]: Connection closed by 147.75.109.163 port 50158 Oct 27 08:22:55.536673 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Oct 27 08:22:55.543787 systemd[1]: sshd@4-46.62.164.160:22-147.75.109.163:50158.service: Deactivated successfully. Oct 27 08:22:55.546268 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 08:22:55.549220 systemd-logind[1593]: Session 5 logged out. Waiting for processes to exit. Oct 27 08:22:55.551082 systemd-logind[1593]: Removed session 5. Oct 27 08:22:55.717330 systemd[1]: Started sshd@5-46.62.164.160:22-147.75.109.163:50160.service - OpenSSH per-connection server daemon (147.75.109.163:50160). Oct 27 08:22:56.727381 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 50160 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:56.728677 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:56.733855 systemd-logind[1593]: New session 6 of user core. Oct 27 08:22:56.744115 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 27 08:22:57.259503 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 08:22:57.259732 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:22:57.260719 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 08:22:57.263068 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:22:57.266676 sudo[1808]: pam_unix(sudo:session): session closed for user root Oct 27 08:22:57.272461 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 08:22:57.272693 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:22:57.291238 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 08:22:57.322357 augenrules[1833]: No rules Oct 27 08:22:57.324343 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 08:22:57.326265 sudo[1807]: pam_unix(sudo:session): session closed for user root Oct 27 08:22:57.324531 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 08:22:57.380400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 27 08:22:57.386076 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:22:57.415552 kubelet[1843]: E1027 08:22:57.415503 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:22:57.418698 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:22:57.418826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:22:57.419175 systemd[1]: kubelet.service: Consumed 121ms CPU time, 110M memory peak. Oct 27 08:22:57.489245 sshd[1806]: Connection closed by 147.75.109.163 port 50160 Oct 27 08:22:57.490191 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Oct 27 08:22:57.495545 systemd[1]: sshd@5-46.62.164.160:22-147.75.109.163:50160.service: Deactivated successfully. Oct 27 08:22:57.498196 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 08:22:57.500716 systemd-logind[1593]: Session 6 logged out. Waiting for processes to exit. Oct 27 08:22:57.503017 systemd-logind[1593]: Removed session 6. Oct 27 08:22:57.708483 systemd[1]: Started sshd@6-46.62.164.160:22-147.75.109.163:50172.service - OpenSSH per-connection server daemon (147.75.109.163:50172). Oct 27 08:22:58.839378 sshd[1855]: Accepted publickey for core from 147.75.109.163 port 50172 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:22:58.840672 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:22:58.845975 systemd-logind[1593]: New session 7 of user core. Oct 27 08:22:58.855149 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 08:22:59.427122 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 08:22:59.427481 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 08:22:59.813314 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 08:22:59.823199 (dockerd)[1876]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 08:23:00.113165 dockerd[1876]: time="2025-10-27T08:23:00.112759761Z" level=info msg="Starting up" Oct 27 08:23:00.114026 dockerd[1876]: time="2025-10-27T08:23:00.114001289Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 27 08:23:00.123823 dockerd[1876]: time="2025-10-27T08:23:00.123764005Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 27 08:23:00.138862 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3370196538-merged.mount: Deactivated successfully. Oct 27 08:23:00.174879 dockerd[1876]: time="2025-10-27T08:23:00.174744699Z" level=info msg="Loading containers: start." Oct 27 08:23:00.186945 kernel: Initializing XFRM netlink socket Oct 27 08:23:00.351039 systemd-timesyncd[1522]: Network configuration changed, trying to establish connection. Oct 27 08:23:00.387463 systemd-networkd[1521]: docker0: Link UP Oct 27 08:23:00.391363 dockerd[1876]: time="2025-10-27T08:23:00.391322484Z" level=info msg="Loading containers: done." 
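dockerd has now created its containerd client, brought up the docker0 bridge and finished loading containers. A minimal check, assuming the docker CLI from the docker-flatcar extension merged earlier and sufficient privileges to reach the daemon socket:

    # Minimal sketch: confirm the Docker daemon started above is answering
    # on its default unix socket. Assumes the docker CLI and root (or
    # docker-group) privileges.
    import subprocess

    def docker_server_version() -> str:
        out = subprocess.run(
            ["docker", "version", "--format", "{{.Server.Version}}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(docker_server_version())   # server version string from the daemon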
Oct 27 08:23:00.402431 dockerd[1876]: time="2025-10-27T08:23:00.402387031Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 08:23:00.402528 dockerd[1876]: time="2025-10-27T08:23:00.402454608Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 27 08:23:00.402528 dockerd[1876]: time="2025-10-27T08:23:00.402518087Z" level=info msg="Initializing buildkit" Oct 27 08:23:00.421398 dockerd[1876]: time="2025-10-27T08:23:00.421365017Z" level=info msg="Completed buildkit initialization" Oct 27 08:23:00.428601 dockerd[1876]: time="2025-10-27T08:23:00.428573404Z" level=info msg="Daemon has completed initialization" Oct 27 08:23:00.428871 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 08:23:00.429053 dockerd[1876]: time="2025-10-27T08:23:00.428719067Z" level=info msg="API listen on /run/docker.sock" Oct 27 08:23:01.811791 systemd-resolved[1301]: Clock change detected. Flushing caches. Oct 27 08:23:01.813326 systemd-timesyncd[1522]: Contacted time server 62.108.36.235:123 (2.flatcar.pool.ntp.org). Oct 27 08:23:01.813380 systemd-timesyncd[1522]: Initial clock synchronization to Mon 2025-10-27 08:23:01.811690 UTC. Oct 27 08:23:02.475353 containerd[1628]: time="2025-10-27T08:23:02.475018594Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 27 08:23:03.000464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632349581.mount: Deactivated successfully. Oct 27 08:23:03.894805 containerd[1628]: time="2025-10-27T08:23:03.894006932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:03.895170 containerd[1628]: time="2025-10-27T08:23:03.895123345Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065492" Oct 27 08:23:03.896388 containerd[1628]: time="2025-10-27T08:23:03.896106028Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:03.898288 containerd[1628]: time="2025-10-27T08:23:03.898245670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:03.899049 containerd[1628]: time="2025-10-27T08:23:03.899014592Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.423953969s" Oct 27 08:23:03.899132 containerd[1628]: time="2025-10-27T08:23:03.899118927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 27 08:23:03.900000 containerd[1628]: time="2025-10-27T08:23:03.899973359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 27 08:23:04.981915 containerd[1628]: 
time="2025-10-27T08:23:04.981825212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:04.982994 containerd[1628]: time="2025-10-27T08:23:04.982795701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159779" Oct 27 08:23:04.983829 containerd[1628]: time="2025-10-27T08:23:04.983802930Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:04.985801 containerd[1628]: time="2025-10-27T08:23:04.985774948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:04.986604 containerd[1628]: time="2025-10-27T08:23:04.986577703Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.086573245s" Oct 27 08:23:04.986676 containerd[1628]: time="2025-10-27T08:23:04.986664406Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 27 08:23:04.987057 containerd[1628]: time="2025-10-27T08:23:04.987041193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 27 08:23:05.840693 containerd[1628]: time="2025-10-27T08:23:05.840613097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:05.841764 containerd[1628]: time="2025-10-27T08:23:05.841403649Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725115" Oct 27 08:23:05.842731 containerd[1628]: time="2025-10-27T08:23:05.842689119Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:05.845835 containerd[1628]: time="2025-10-27T08:23:05.845798169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:05.846897 containerd[1628]: time="2025-10-27T08:23:05.846868926Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 859.801896ms" Oct 27 08:23:05.846985 containerd[1628]: time="2025-10-27T08:23:05.846965628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 27 08:23:05.847928 containerd[1628]: 
time="2025-10-27T08:23:05.847884050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 27 08:23:06.792369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount911575861.mount: Deactivated successfully. Oct 27 08:23:07.016982 containerd[1628]: time="2025-10-27T08:23:07.016921845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:07.018118 containerd[1628]: time="2025-10-27T08:23:07.017968508Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964727" Oct 27 08:23:07.018978 containerd[1628]: time="2025-10-27T08:23:07.018955278Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:07.020869 containerd[1628]: time="2025-10-27T08:23:07.020844731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:07.021317 containerd[1628]: time="2025-10-27T08:23:07.021290637Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.173370499s" Oct 27 08:23:07.021397 containerd[1628]: time="2025-10-27T08:23:07.021379043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 27 08:23:07.021987 containerd[1628]: time="2025-10-27T08:23:07.021963559Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 27 08:23:07.506577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972873371.mount: Deactivated successfully. 
Oct 27 08:23:08.401734 containerd[1628]: time="2025-10-27T08:23:08.401679707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:08.402832 containerd[1628]: time="2025-10-27T08:23:08.402576549Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Oct 27 08:23:08.403763 containerd[1628]: time="2025-10-27T08:23:08.403733639Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:08.406049 containerd[1628]: time="2025-10-27T08:23:08.406029834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:08.406834 containerd[1628]: time="2025-10-27T08:23:08.406813935Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.384821872s" Oct 27 08:23:08.406915 containerd[1628]: time="2025-10-27T08:23:08.406902671Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 27 08:23:08.407395 containerd[1628]: time="2025-10-27T08:23:08.407359167Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 27 08:23:08.844165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 27 08:23:08.846427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:23:08.859109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224697302.mount: Deactivated successfully. 
Oct 27 08:23:08.867664 containerd[1628]: time="2025-10-27T08:23:08.867633999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:08.868872 containerd[1628]: time="2025-10-27T08:23:08.868852714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Oct 27 08:23:08.869825 containerd[1628]: time="2025-10-27T08:23:08.869750888Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:08.872478 containerd[1628]: time="2025-10-27T08:23:08.872402009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:08.873138 containerd[1628]: time="2025-10-27T08:23:08.873112582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 465.721675ms" Oct 27 08:23:08.873231 containerd[1628]: time="2025-10-27T08:23:08.873216957Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 27 08:23:08.874572 containerd[1628]: time="2025-10-27T08:23:08.874521353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 27 08:23:08.983535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:23:08.989657 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 08:23:09.026792 kubelet[2224]: E1027 08:23:09.026697 2224 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 08:23:09.029321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 08:23:09.029463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 08:23:09.029836 systemd[1]: kubelet.service: Consumed 147ms CPU time, 110.3M memory peak. 
Oct 27 08:23:10.965049 containerd[1628]: time="2025-10-27T08:23:10.964994483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:10.966303 containerd[1628]: time="2025-10-27T08:23:10.966064009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514639" Oct 27 08:23:10.967180 containerd[1628]: time="2025-10-27T08:23:10.967142561Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:10.969704 containerd[1628]: time="2025-10-27T08:23:10.969680840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:10.970739 containerd[1628]: time="2025-10-27T08:23:10.970702928Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.096133864s" Oct 27 08:23:10.970811 containerd[1628]: time="2025-10-27T08:23:10.970746940Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 27 08:23:13.780798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:23:13.781341 systemd[1]: kubelet.service: Consumed 147ms CPU time, 110.3M memory peak. Oct 27 08:23:13.783382 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:23:13.819623 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-7.scope)... Oct 27 08:23:13.819773 systemd[1]: Reloading... Oct 27 08:23:13.898223 zram_generator::config[2344]: No configuration found. Oct 27 08:23:14.080617 systemd[1]: Reloading finished in 260 ms. Oct 27 08:23:14.116840 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 27 08:23:14.116907 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 27 08:23:14.117093 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:23:14.117137 systemd[1]: kubelet.service: Consumed 75ms CPU time, 97.7M memory peak. Oct 27 08:23:14.118243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:23:14.223356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:23:14.231640 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:23:14.274096 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:23:14.274743 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 27 08:23:14.274743 kubelet[2398]: I1027 08:23:14.274500 2398 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:23:14.852316 kubelet[2398]: I1027 08:23:14.852276 2398 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 08:23:14.852316 kubelet[2398]: I1027 08:23:14.852297 2398 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:23:14.852316 kubelet[2398]: I1027 08:23:14.852314 2398 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 08:23:14.852316 kubelet[2398]: I1027 08:23:14.852319 2398 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 08:23:14.852622 kubelet[2398]: I1027 08:23:14.852548 2398 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:23:14.864681 kubelet[2398]: I1027 08:23:14.863768 2398 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:23:14.869348 kubelet[2398]: E1027 08:23:14.869323 2398 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.164.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 27 08:23:14.874097 kubelet[2398]: I1027 08:23:14.874073 2398 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:23:14.879219 kubelet[2398]: I1027 08:23:14.879184 2398 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 27 08:23:14.882969 kubelet[2398]: I1027 08:23:14.882929 2398 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:23:14.884203 kubelet[2398]: I1027 08:23:14.882956 2398 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-9-k-f136f833c6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:23:14.884203 kubelet[2398]: I1027 08:23:14.884196 2398 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 08:23:14.884203 kubelet[2398]: I1027 08:23:14.884205 2398 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 08:23:14.884385 kubelet[2398]: I1027 08:23:14.884272 2398 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 08:23:14.886169 kubelet[2398]: I1027 08:23:14.886140 2398 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:23:14.886336 kubelet[2398]: I1027 08:23:14.886294 2398 kubelet.go:475] "Attempting to sync node with API server" Oct 27 08:23:14.886336 kubelet[2398]: I1027 08:23:14.886326 2398 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:23:14.886756 kubelet[2398]: E1027 08:23:14.886714 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.164.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-9-k-f136f833c6&limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:23:14.887572 kubelet[2398]: I1027 08:23:14.887549 2398 kubelet.go:387] "Adding apiserver pod source" Oct 27 08:23:14.887572 kubelet[2398]: I1027 08:23:14.887573 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:23:14.891779 kubelet[2398]: E1027 08:23:14.891653 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://46.62.164.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 08:23:14.892046 kubelet[2398]: I1027 08:23:14.892025 2398 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:23:14.895207 kubelet[2398]: I1027 08:23:14.895181 2398 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:23:14.895251 kubelet[2398]: I1027 08:23:14.895211 2398 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 08:23:14.897411 kubelet[2398]: W1027 08:23:14.897378 2398 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 27 08:23:14.902295 kubelet[2398]: I1027 08:23:14.902254 2398 server.go:1262] "Started kubelet" Oct 27 08:23:14.904469 kubelet[2398]: I1027 08:23:14.904440 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:23:14.908673 kubelet[2398]: E1027 08:23:14.906007 2398 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.164.160:6443/api/v1/namespaces/default/events\": dial tcp 46.62.164.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-9-9-k-f136f833c6.18724b7ac407b175 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-9-9-k-f136f833c6,UID:ci-9999-9-9-k-f136f833c6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-9-9-k-f136f833c6,},FirstTimestamp:2025-10-27 08:23:14.902217077 +0000 UTC m=+0.667249482,LastTimestamp:2025-10-27 08:23:14.902217077 +0000 UTC m=+0.667249482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-9-k-f136f833c6,}" Oct 27 08:23:14.912715 kubelet[2398]: I1027 08:23:14.910366 2398 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:23:14.920376 kubelet[2398]: I1027 08:23:14.920342 2398 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:23:14.921802 kubelet[2398]: I1027 08:23:14.921780 2398 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 08:23:14.921973 kubelet[2398]: E1027 08:23:14.921942 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:14.923885 kubelet[2398]: I1027 08:23:14.923871 2398 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 08:23:14.924046 kubelet[2398]: I1027 08:23:14.924035 2398 reconciler.go:29] "Reconciler: start to sync state" Oct 27 08:23:14.924623 kubelet[2398]: I1027 08:23:14.924609 2398 server.go:310] "Adding debug handlers to kubelet server" Oct 27 08:23:14.927324 kubelet[2398]: I1027 08:23:14.927296 2398 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:23:14.927429 kubelet[2398]: I1027 08:23:14.927415 2398 
server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 08:23:14.927638 kubelet[2398]: I1027 08:23:14.927625 2398 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:23:14.930158 kubelet[2398]: E1027 08:23:14.929577 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.164.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:23:14.930158 kubelet[2398]: E1027 08:23:14.929669 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.164.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-k-f136f833c6?timeout=10s\": dial tcp 46.62.164.160:6443: connect: connection refused" interval="200ms" Oct 27 08:23:14.931786 kubelet[2398]: I1027 08:23:14.930586 2398 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:23:14.931786 kubelet[2398]: I1027 08:23:14.930656 2398 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:23:14.931976 kubelet[2398]: I1027 08:23:14.931948 2398 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:23:14.935651 kubelet[2398]: I1027 08:23:14.935574 2398 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 08:23:14.937520 kubelet[2398]: I1027 08:23:14.937499 2398 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 27 08:23:14.937520 kubelet[2398]: I1027 08:23:14.937520 2398 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 08:23:14.937585 kubelet[2398]: I1027 08:23:14.937539 2398 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 08:23:14.937585 kubelet[2398]: E1027 08:23:14.937570 2398 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:23:14.944206 kubelet[2398]: E1027 08:23:14.944184 2398 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:23:14.944378 kubelet[2398]: E1027 08:23:14.944352 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.164.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 08:23:14.957480 kubelet[2398]: I1027 08:23:14.957468 2398 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:23:14.957730 kubelet[2398]: I1027 08:23:14.957554 2398 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:23:14.957730 kubelet[2398]: I1027 08:23:14.957567 2398 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:23:14.959086 kubelet[2398]: I1027 08:23:14.959058 2398 policy_none.go:49] "None policy: Start" Oct 27 08:23:14.959086 kubelet[2398]: I1027 08:23:14.959082 2398 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 08:23:14.959425 kubelet[2398]: I1027 08:23:14.959359 2398 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 08:23:14.960520 kubelet[2398]: I1027 08:23:14.960482 2398 policy_none.go:47] "Start" Oct 27 08:23:14.965062 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 27 08:23:14.977046 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 08:23:14.980472 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 08:23:14.988215 kubelet[2398]: E1027 08:23:14.988176 2398 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:23:14.988344 kubelet[2398]: I1027 08:23:14.988322 2398 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:23:14.988960 kubelet[2398]: I1027 08:23:14.988342 2398 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:23:14.988960 kubelet[2398]: I1027 08:23:14.988801 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:23:14.989740 kubelet[2398]: E1027 08:23:14.989716 2398 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 27 08:23:14.989792 kubelet[2398]: E1027 08:23:14.989751 2398 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:15.050744 systemd[1]: Created slice kubepods-burstable-poda05b68404c8d7a13bb85317e1e3deb73.slice - libcontainer container kubepods-burstable-poda05b68404c8d7a13bb85317e1e3deb73.slice. Oct 27 08:23:15.059223 kubelet[2398]: E1027 08:23:15.059042 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.061781 systemd[1]: Created slice kubepods-burstable-pod8edbedca1c8a0dd93a87b5a4eaf05b5d.slice - libcontainer container kubepods-burstable-pod8edbedca1c8a0dd93a87b5a4eaf05b5d.slice. 
Oct 27 08:23:15.073712 kubelet[2398]: E1027 08:23:15.073684 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.076360 systemd[1]: Created slice kubepods-burstable-podf4cdd35661db4b76d371dbbffd98b4fa.slice - libcontainer container kubepods-burstable-podf4cdd35661db4b76d371dbbffd98b4fa.slice. Oct 27 08:23:15.078280 kubelet[2398]: E1027 08:23:15.078257 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.091967 kubelet[2398]: I1027 08:23:15.091919 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.092337 kubelet[2398]: E1027 08:23:15.092279 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.164.160:6443/api/v1/nodes\": dial tcp 46.62.164.160:6443: connect: connection refused" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.125899 kubelet[2398]: I1027 08:23:15.125768 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.125899 kubelet[2398]: I1027 08:23:15.125807 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.125899 kubelet[2398]: I1027 08:23:15.125832 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.125899 kubelet[2398]: I1027 08:23:15.125851 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4cdd35661db4b76d371dbbffd98b4fa-kubeconfig\") pod \"kube-scheduler-ci-9999-9-9-k-f136f833c6\" (UID: \"f4cdd35661db4b76d371dbbffd98b4fa\") " pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.125899 kubelet[2398]: I1027 08:23:15.125867 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8edbedca1c8a0dd93a87b5a4eaf05b5d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" (UID: \"8edbedca1c8a0dd93a87b5a4eaf05b5d\") " pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.126099 kubelet[2398]: I1027 08:23:15.125884 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-ca-certs\") pod 
\"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.126099 kubelet[2398]: I1027 08:23:15.125899 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.126099 kubelet[2398]: I1027 08:23:15.125916 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8edbedca1c8a0dd93a87b5a4eaf05b5d-ca-certs\") pod \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" (UID: \"8edbedca1c8a0dd93a87b5a4eaf05b5d\") " pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.126099 kubelet[2398]: I1027 08:23:15.125930 2398 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8edbedca1c8a0dd93a87b5a4eaf05b5d-k8s-certs\") pod \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" (UID: \"8edbedca1c8a0dd93a87b5a4eaf05b5d\") " pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.130267 kubelet[2398]: E1027 08:23:15.130243 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.164.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-k-f136f833c6?timeout=10s\": dial tcp 46.62.164.160:6443: connect: connection refused" interval="400ms" Oct 27 08:23:15.294346 kubelet[2398]: I1027 08:23:15.294305 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.294940 kubelet[2398]: E1027 08:23:15.294764 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.164.160:6443/api/v1/nodes\": dial tcp 46.62.164.160:6443: connect: connection refused" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.362040 containerd[1628]: time="2025-10-27T08:23:15.361977184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-9-k-f136f833c6,Uid:a05b68404c8d7a13bb85317e1e3deb73,Namespace:kube-system,Attempt:0,}" Oct 27 08:23:15.381440 containerd[1628]: time="2025-10-27T08:23:15.381219645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-9-k-f136f833c6,Uid:8edbedca1c8a0dd93a87b5a4eaf05b5d,Namespace:kube-system,Attempt:0,}" Oct 27 08:23:15.381605 containerd[1628]: time="2025-10-27T08:23:15.381226278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-9-k-f136f833c6,Uid:f4cdd35661db4b76d371dbbffd98b4fa,Namespace:kube-system,Attempt:0,}" Oct 27 08:23:15.531264 kubelet[2398]: E1027 08:23:15.531179 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.164.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-k-f136f833c6?timeout=10s\": dial tcp 46.62.164.160:6443: connect: connection refused" interval="800ms" Oct 27 08:23:15.696833 kubelet[2398]: I1027 08:23:15.696694 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.697190 kubelet[2398]: E1027 08:23:15.697067 2398 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.164.160:6443/api/v1/nodes\": dial tcp 46.62.164.160:6443: connect: connection refused" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:15.749264 kubelet[2398]: E1027 08:23:15.749211 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.164.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-9-k-f136f833c6&limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 27 08:23:15.783049 kubelet[2398]: E1027 08:23:15.782996 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.164.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 27 08:23:15.811544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount545209880.mount: Deactivated successfully. Oct 27 08:23:15.817523 containerd[1628]: time="2025-10-27T08:23:15.817477604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:23:15.819166 containerd[1628]: time="2025-10-27T08:23:15.819127778Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:23:15.823716 containerd[1628]: time="2025-10-27T08:23:15.823646921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Oct 27 08:23:15.824466 containerd[1628]: time="2025-10-27T08:23:15.824427064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 08:23:15.826508 containerd[1628]: time="2025-10-27T08:23:15.825913512Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:23:15.827492 containerd[1628]: time="2025-10-27T08:23:15.827435705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:23:15.827798 containerd[1628]: time="2025-10-27T08:23:15.827778188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 27 08:23:15.829705 containerd[1628]: time="2025-10-27T08:23:15.829682949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 08:23:15.830185 containerd[1628]: time="2025-10-27T08:23:15.830158080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 441.410209ms" Oct 27 
08:23:15.831300 containerd[1628]: time="2025-10-27T08:23:15.831264946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 442.507367ms" Oct 27 08:23:15.833056 containerd[1628]: time="2025-10-27T08:23:15.833016209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 465.413538ms" Oct 27 08:23:15.860613 kubelet[2398]: E1027 08:23:15.860505 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.164.160:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 27 08:23:15.951908 containerd[1628]: time="2025-10-27T08:23:15.951592964Z" level=info msg="connecting to shim e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211" address="unix:///run/containerd/s/c0d53ad9060f82447263bcd15514d9543f802979296e7589f4964e87ef6b898d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:15.953518 containerd[1628]: time="2025-10-27T08:23:15.953423385Z" level=info msg="connecting to shim fdf064aabf4179420f4374583a11f5b43eae18acc24b469bfdc2ae297f655039" address="unix:///run/containerd/s/15c08abf432f376634bf1b141c909b2babdee3618c83eb7ef2281b72764b924d" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:15.955802 containerd[1628]: time="2025-10-27T08:23:15.955673024Z" level=info msg="connecting to shim e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4" address="unix:///run/containerd/s/3d62191636e3eac6c0a414c07b913ac882cedfacb75ebb6f01af3a48480b41c8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:15.996787 kubelet[2398]: E1027 08:23:15.996724 2398 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.164.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.164.160:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 27 08:23:16.046586 systemd[1]: Started cri-containerd-e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4.scope - libcontainer container e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4. Oct 27 08:23:16.048214 systemd[1]: Started cri-containerd-fdf064aabf4179420f4374583a11f5b43eae18acc24b469bfdc2ae297f655039.scope - libcontainer container fdf064aabf4179420f4374583a11f5b43eae18acc24b469bfdc2ae297f655039. Oct 27 08:23:16.052598 systemd[1]: Started cri-containerd-e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211.scope - libcontainer container e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211. 
Oct 27 08:23:16.138986 containerd[1628]: time="2025-10-27T08:23:16.138913258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-9-k-f136f833c6,Uid:f4cdd35661db4b76d371dbbffd98b4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4\"" Oct 27 08:23:16.146469 containerd[1628]: time="2025-10-27T08:23:16.146386651Z" level=info msg="CreateContainer within sandbox \"e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 08:23:16.147689 containerd[1628]: time="2025-10-27T08:23:16.147062158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-9-k-f136f833c6,Uid:8edbedca1c8a0dd93a87b5a4eaf05b5d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf064aabf4179420f4374583a11f5b43eae18acc24b469bfdc2ae297f655039\"" Oct 27 08:23:16.155566 containerd[1628]: time="2025-10-27T08:23:16.155521270Z" level=info msg="CreateContainer within sandbox \"fdf064aabf4179420f4374583a11f5b43eae18acc24b469bfdc2ae297f655039\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 08:23:16.158227 containerd[1628]: time="2025-10-27T08:23:16.158204290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-9-k-f136f833c6,Uid:a05b68404c8d7a13bb85317e1e3deb73,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211\"" Oct 27 08:23:16.163469 containerd[1628]: time="2025-10-27T08:23:16.163375978Z" level=info msg="CreateContainer within sandbox \"e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 08:23:16.166014 containerd[1628]: time="2025-10-27T08:23:16.165996772Z" level=info msg="Container 575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:16.166895 containerd[1628]: time="2025-10-27T08:23:16.166867004Z" level=info msg="Container 99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:16.177239 containerd[1628]: time="2025-10-27T08:23:16.177160616Z" level=info msg="Container 775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:16.179164 containerd[1628]: time="2025-10-27T08:23:16.179135328Z" level=info msg="CreateContainer within sandbox \"e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\"" Oct 27 08:23:16.180512 containerd[1628]: time="2025-10-27T08:23:16.180303969Z" level=info msg="StartContainer for \"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\"" Oct 27 08:23:16.181079 containerd[1628]: time="2025-10-27T08:23:16.181061901Z" level=info msg="CreateContainer within sandbox \"fdf064aabf4179420f4374583a11f5b43eae18acc24b469bfdc2ae297f655039\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f\"" Oct 27 08:23:16.181379 containerd[1628]: time="2025-10-27T08:23:16.181352967Z" level=info msg="connecting to shim 575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0" 
address="unix:///run/containerd/s/3d62191636e3eac6c0a414c07b913ac882cedfacb75ebb6f01af3a48480b41c8" protocol=ttrpc version=3 Oct 27 08:23:16.181631 containerd[1628]: time="2025-10-27T08:23:16.181540098Z" level=info msg="StartContainer for \"99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f\"" Oct 27 08:23:16.183527 containerd[1628]: time="2025-10-27T08:23:16.183504561Z" level=info msg="CreateContainer within sandbox \"e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\"" Oct 27 08:23:16.183735 containerd[1628]: time="2025-10-27T08:23:16.183514169Z" level=info msg="connecting to shim 99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f" address="unix:///run/containerd/s/15c08abf432f376634bf1b141c909b2babdee3618c83eb7ef2281b72764b924d" protocol=ttrpc version=3 Oct 27 08:23:16.184214 containerd[1628]: time="2025-10-27T08:23:16.184196559Z" level=info msg="StartContainer for \"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\"" Oct 27 08:23:16.185500 containerd[1628]: time="2025-10-27T08:23:16.185429851Z" level=info msg="connecting to shim 775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd" address="unix:///run/containerd/s/c0d53ad9060f82447263bcd15514d9543f802979296e7589f4964e87ef6b898d" protocol=ttrpc version=3 Oct 27 08:23:16.211580 systemd[1]: Started cri-containerd-575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0.scope - libcontainer container 575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0. Oct 27 08:23:16.221676 systemd[1]: Started cri-containerd-775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd.scope - libcontainer container 775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd. Oct 27 08:23:16.223155 systemd[1]: Started cri-containerd-99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f.scope - libcontainer container 99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f. 
Oct 27 08:23:16.323473 containerd[1628]: time="2025-10-27T08:23:16.323046025Z" level=info msg="StartContainer for \"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\" returns successfully" Oct 27 08:23:16.326180 containerd[1628]: time="2025-10-27T08:23:16.326129557Z" level=info msg="StartContainer for \"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\" returns successfully" Oct 27 08:23:16.333070 kubelet[2398]: E1027 08:23:16.333025 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.164.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-k-f136f833c6?timeout=10s\": dial tcp 46.62.164.160:6443: connect: connection refused" interval="1.6s" Oct 27 08:23:16.336463 containerd[1628]: time="2025-10-27T08:23:16.336233032Z" level=info msg="StartContainer for \"99d741af1de2a58b8e7001fbd317b88edcb9a93b7d67e28757cee7cb3de4b48f\" returns successfully" Oct 27 08:23:16.499196 kubelet[2398]: I1027 08:23:16.499069 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:16.499364 kubelet[2398]: E1027 08:23:16.499339 2398 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.164.160:6443/api/v1/nodes\": dial tcp 46.62.164.160:6443: connect: connection refused" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:16.966816 kubelet[2398]: E1027 08:23:16.966777 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:16.970073 kubelet[2398]: E1027 08:23:16.970055 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:16.972670 kubelet[2398]: E1027 08:23:16.972652 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:17.534531 systemd[1]: Started sshd@7-46.62.164.160:22-177.234.145.2:59262.service - OpenSSH per-connection server daemon (177.234.145.2:59262). 
Oct 27 08:23:17.978834 kubelet[2398]: E1027 08:23:17.978809 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:17.979130 kubelet[2398]: E1027 08:23:17.979053 2398 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.064142 kubelet[2398]: E1027 08:23:18.064103 2398 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999-9-9-k-f136f833c6\" not found" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.102425 kubelet[2398]: I1027 08:23:18.102395 2398 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.214972 kubelet[2398]: I1027 08:23:18.214928 2398 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.214972 kubelet[2398]: E1027 08:23:18.214966 2398 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-9999-9-9-k-f136f833c6\": node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.226129 kubelet[2398]: E1027 08:23:18.226092 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.327339 kubelet[2398]: E1027 08:23:18.327194 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.427936 kubelet[2398]: E1027 08:23:18.427874 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.528951 kubelet[2398]: E1027 08:23:18.528900 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.629775 kubelet[2398]: E1027 08:23:18.629710 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.699507 sshd[2679]: Invalid user torrent from 177.234.145.2 port 59262 Oct 27 08:23:18.730742 kubelet[2398]: E1027 08:23:18.730665 2398 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-9999-9-9-k-f136f833c6\" not found" Oct 27 08:23:18.823767 kubelet[2398]: I1027 08:23:18.823693 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.830033 kubelet[2398]: E1027 08:23:18.829981 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-9-k-f136f833c6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.830033 kubelet[2398]: I1027 08:23:18.830010 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.834303 kubelet[2398]: E1027 08:23:18.834254 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.834303 kubelet[2398]: I1027 08:23:18.834284 2398 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.836820 kubelet[2398]: E1027 08:23:18.836745 2398 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:18.895843 kubelet[2398]: I1027 08:23:18.895490 2398 apiserver.go:52] "Watching apiserver" Oct 27 08:23:18.923172 sshd[2679]: Received disconnect from 177.234.145.2 port 59262:11: Bye Bye [preauth] Oct 27 08:23:18.923172 sshd[2679]: Disconnected from invalid user torrent 177.234.145.2 port 59262 [preauth] Oct 27 08:23:18.924580 systemd[1]: sshd@7-46.62.164.160:22-177.234.145.2:59262.service: Deactivated successfully. Oct 27 08:23:18.925256 kubelet[2398]: I1027 08:23:18.924675 2398 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 08:23:18.978409 kubelet[2398]: I1027 08:23:18.978354 2398 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:19.009797 systemd[1]: Started sshd@8-46.62.164.160:22-182.40.195.233:40668.service - OpenSSH per-connection server daemon (182.40.195.233:40668). Oct 27 08:23:20.177984 systemd[1]: Reload requested from client PID 2690 ('systemctl') (unit session-7.scope)... Oct 27 08:23:20.178003 systemd[1]: Reloading... Oct 27 08:23:20.294483 zram_generator::config[2737]: No configuration found. Oct 27 08:23:20.473641 systemd[1]: Reloading finished in 295 ms. Oct 27 08:23:20.507282 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:23:20.531354 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 08:23:20.531598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:23:20.531650 systemd[1]: kubelet.service: Consumed 982ms CPU time, 124M memory peak. Oct 27 08:23:20.533204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 08:23:20.653427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 08:23:20.659873 (kubelet)[2788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 08:23:20.712229 kubelet[2788]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 08:23:20.713466 kubelet[2788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 27 08:23:20.713466 kubelet[2788]: I1027 08:23:20.712574 2788 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 08:23:20.718338 kubelet[2788]: I1027 08:23:20.718311 2788 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 27 08:23:20.718338 kubelet[2788]: I1027 08:23:20.718331 2788 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 08:23:20.718401 kubelet[2788]: I1027 08:23:20.718347 2788 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 27 08:23:20.718401 kubelet[2788]: I1027 08:23:20.718352 2788 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 27 08:23:20.718642 kubelet[2788]: I1027 08:23:20.718621 2788 server.go:956] "Client rotation is on, will bootstrap in background" Oct 27 08:23:20.719618 kubelet[2788]: I1027 08:23:20.719598 2788 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 27 08:23:20.730588 kubelet[2788]: I1027 08:23:20.730523 2788 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 08:23:20.734496 kubelet[2788]: I1027 08:23:20.733651 2788 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 27 08:23:20.735688 kubelet[2788]: I1027 08:23:20.735664 2788 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 27 08:23:20.736352 kubelet[2788]: I1027 08:23:20.736319 2788 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 08:23:20.736499 kubelet[2788]: I1027 08:23:20.736351 2788 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-9-k-f136f833c6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 08:23:20.736575 kubelet[2788]: I1027 08:23:20.736500 2788 topology_manager.go:138] "Creating topology 
manager with none policy" Oct 27 08:23:20.736575 kubelet[2788]: I1027 08:23:20.736508 2788 container_manager_linux.go:306] "Creating device plugin manager" Oct 27 08:23:20.736575 kubelet[2788]: I1027 08:23:20.736527 2788 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 27 08:23:20.737247 kubelet[2788]: I1027 08:23:20.737217 2788 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:23:20.740520 kubelet[2788]: I1027 08:23:20.740493 2788 kubelet.go:475] "Attempting to sync node with API server" Oct 27 08:23:20.740520 kubelet[2788]: I1027 08:23:20.740510 2788 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 08:23:20.740582 kubelet[2788]: I1027 08:23:20.740532 2788 kubelet.go:387] "Adding apiserver pod source" Oct 27 08:23:20.740907 kubelet[2788]: I1027 08:23:20.740860 2788 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 08:23:20.743098 kubelet[2788]: I1027 08:23:20.743070 2788 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 27 08:23:20.744647 kubelet[2788]: I1027 08:23:20.744539 2788 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 27 08:23:20.744647 kubelet[2788]: I1027 08:23:20.744569 2788 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 27 08:23:20.752835 kubelet[2788]: I1027 08:23:20.752817 2788 server.go:1262] "Started kubelet" Oct 27 08:23:20.754131 kubelet[2788]: I1027 08:23:20.754053 2788 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 08:23:20.756484 kubelet[2788]: I1027 08:23:20.755217 2788 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 08:23:20.756484 kubelet[2788]: I1027 08:23:20.755252 2788 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 27 08:23:20.756484 kubelet[2788]: I1027 08:23:20.755432 2788 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 08:23:20.757887 kubelet[2788]: I1027 08:23:20.757873 2788 server.go:310] "Adding debug handlers to kubelet server" Oct 27 08:23:20.763557 kubelet[2788]: I1027 08:23:20.763542 2788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 08:23:20.775655 kubelet[2788]: I1027 08:23:20.775636 2788 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 08:23:20.778261 kubelet[2788]: I1027 08:23:20.778223 2788 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 27 08:23:20.778323 kubelet[2788]: I1027 08:23:20.778290 2788 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 27 08:23:20.778374 kubelet[2788]: I1027 08:23:20.778360 2788 reconciler.go:29] "Reconciler: start to sync state" Oct 27 08:23:20.782568 kubelet[2788]: I1027 08:23:20.782369 2788 factory.go:223] Registration of the systemd container factory successfully Oct 27 08:23:20.782709 kubelet[2788]: I1027 08:23:20.782692 2788 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 08:23:20.783149 
kubelet[2788]: I1027 08:23:20.782593 2788 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 27 08:23:20.784618 kubelet[2788]: I1027 08:23:20.784470 2788 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 27 08:23:20.784695 kubelet[2788]: I1027 08:23:20.784683 2788 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 27 08:23:20.784772 kubelet[2788]: I1027 08:23:20.784765 2788 kubelet.go:2427] "Starting kubelet main sync loop" Oct 27 08:23:20.784840 kubelet[2788]: E1027 08:23:20.784828 2788 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 08:23:20.785566 kubelet[2788]: E1027 08:23:20.785508 2788 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 08:23:20.791018 kubelet[2788]: I1027 08:23:20.791005 2788 factory.go:223] Registration of the containerd container factory successfully Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833492 2788 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833518 2788 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833534 2788 state_mem.go:36] "Initialized new in-memory state store" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833626 2788 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833634 2788 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833648 2788 policy_none.go:49] "None policy: Start" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833656 2788 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833663 2788 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833728 2788 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 27 08:23:20.833881 kubelet[2788]: I1027 08:23:20.833734 2788 policy_none.go:47] "Start" Oct 27 08:23:20.837786 kubelet[2788]: E1027 08:23:20.837766 2788 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 27 08:23:20.837900 kubelet[2788]: I1027 08:23:20.837882 2788 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 08:23:20.837947 kubelet[2788]: I1027 08:23:20.837895 2788 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 08:23:20.838712 kubelet[2788]: I1027 08:23:20.838697 2788 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 08:23:20.842342 kubelet[2788]: E1027 08:23:20.842015 2788 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 08:23:20.886099 kubelet[2788]: I1027 08:23:20.886052 2788 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.886586 kubelet[2788]: I1027 08:23:20.886563 2788 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.889301 kubelet[2788]: I1027 08:23:20.886309 2788 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.903493 kubelet[2788]: E1027 08:23:20.903439 2788 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-9-k-f136f833c6\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.943375 kubelet[2788]: I1027 08:23:20.943337 2788 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.950289 kubelet[2788]: I1027 08:23:20.950264 2788 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.950354 kubelet[2788]: I1027 08:23:20.950335 2788 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981431 kubelet[2788]: I1027 08:23:20.981337 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8edbedca1c8a0dd93a87b5a4eaf05b5d-k8s-certs\") pod \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" (UID: \"8edbedca1c8a0dd93a87b5a4eaf05b5d\") " pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981431 kubelet[2788]: I1027 08:23:20.981369 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981431 kubelet[2788]: I1027 08:23:20.981389 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981431 kubelet[2788]: I1027 08:23:20.981411 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4cdd35661db4b76d371dbbffd98b4fa-kubeconfig\") pod \"kube-scheduler-ci-9999-9-9-k-f136f833c6\" (UID: \"f4cdd35661db4b76d371dbbffd98b4fa\") " pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981593 kubelet[2788]: I1027 08:23:20.981575 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8edbedca1c8a0dd93a87b5a4eaf05b5d-ca-certs\") pod \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" (UID: \"8edbedca1c8a0dd93a87b5a4eaf05b5d\") " pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981618 kubelet[2788]: I1027 08:23:20.981600 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8edbedca1c8a0dd93a87b5a4eaf05b5d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" (UID: \"8edbedca1c8a0dd93a87b5a4eaf05b5d\") " pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981618 kubelet[2788]: I1027 08:23:20.981615 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-ca-certs\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981663 kubelet[2788]: I1027 08:23:20.981628 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:20.981663 kubelet[2788]: I1027 08:23:20.981643 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a05b68404c8d7a13bb85317e1e3deb73-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-9-k-f136f833c6\" (UID: \"a05b68404c8d7a13bb85317e1e3deb73\") " pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:21.743221 kubelet[2788]: I1027 08:23:21.743168 2788 apiserver.go:52] "Watching apiserver" Oct 27 08:23:21.775979 kubelet[2788]: I1027 08:23:21.775372 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" podStartSLOduration=3.775347005 podStartE2EDuration="3.775347005s" podCreationTimestamp="2025-10-27 08:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:23:21.775184891 +0000 UTC m=+1.109922238" watchObservedRunningTime="2025-10-27 08:23:21.775347005 +0000 UTC m=+1.110084352" Oct 27 08:23:21.778467 kubelet[2788]: I1027 08:23:21.778417 2788 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 27 08:23:21.795055 kubelet[2788]: I1027 08:23:21.794941 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" podStartSLOduration=1.794921228 podStartE2EDuration="1.794921228s" podCreationTimestamp="2025-10-27 08:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:23:21.785964694 +0000 UTC m=+1.120702032" watchObservedRunningTime="2025-10-27 08:23:21.794921228 +0000 UTC m=+1.129658576" Oct 27 08:23:21.817867 kubelet[2788]: I1027 08:23:21.817823 2788 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:21.818513 kubelet[2788]: I1027 08:23:21.818180 2788 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:21.825996 kubelet[2788]: E1027 08:23:21.825843 2788 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-9-k-f136f833c6\" already 
exists" pod="kube-system/kube-apiserver-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:21.826905 kubelet[2788]: E1027 08:23:21.826860 2788 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-9-k-f136f833c6\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-9-k-f136f833c6" Oct 27 08:23:21.829642 kubelet[2788]: I1027 08:23:21.829573 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999-9-9-k-f136f833c6" podStartSLOduration=1.829557415 podStartE2EDuration="1.829557415s" podCreationTimestamp="2025-10-27 08:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:23:21.795894123 +0000 UTC m=+1.130631470" watchObservedRunningTime="2025-10-27 08:23:21.829557415 +0000 UTC m=+1.164294762" Oct 27 08:23:22.733069 sshd[2686]: Received disconnect from 182.40.195.233 port 40668:11: Bye Bye [preauth] Oct 27 08:23:22.733069 sshd[2686]: Disconnected from authenticating user root 182.40.195.233 port 40668 [preauth] Oct 27 08:23:22.735095 systemd[1]: sshd@8-46.62.164.160:22-182.40.195.233:40668.service: Deactivated successfully. Oct 27 08:23:27.385761 kubelet[2788]: I1027 08:23:27.385711 2788 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 08:23:27.386147 containerd[1628]: time="2025-10-27T08:23:27.386038353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 08:23:27.386329 kubelet[2788]: I1027 08:23:27.386192 2788 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 08:23:28.114495 systemd[1]: Created slice kubepods-besteffort-pod0ef5bc99_a58f_46c4_8791_83ff2e433a87.slice - libcontainer container kubepods-besteffort-pod0ef5bc99_a58f_46c4_8791_83ff2e433a87.slice. 
Oct 27 08:23:28.135023 kubelet[2788]: I1027 08:23:28.134988 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ef5bc99-a58f-46c4-8791-83ff2e433a87-xtables-lock\") pod \"kube-proxy-cmrrv\" (UID: \"0ef5bc99-a58f-46c4-8791-83ff2e433a87\") " pod="kube-system/kube-proxy-cmrrv" Oct 27 08:23:28.135023 kubelet[2788]: I1027 08:23:28.135029 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ef5bc99-a58f-46c4-8791-83ff2e433a87-lib-modules\") pod \"kube-proxy-cmrrv\" (UID: \"0ef5bc99-a58f-46c4-8791-83ff2e433a87\") " pod="kube-system/kube-proxy-cmrrv" Oct 27 08:23:28.135184 kubelet[2788]: I1027 08:23:28.135069 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ef5bc99-a58f-46c4-8791-83ff2e433a87-kube-proxy\") pod \"kube-proxy-cmrrv\" (UID: \"0ef5bc99-a58f-46c4-8791-83ff2e433a87\") " pod="kube-system/kube-proxy-cmrrv" Oct 27 08:23:28.135184 kubelet[2788]: I1027 08:23:28.135085 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2q65\" (UniqueName: \"kubernetes.io/projected/0ef5bc99-a58f-46c4-8791-83ff2e433a87-kube-api-access-k2q65\") pod \"kube-proxy-cmrrv\" (UID: \"0ef5bc99-a58f-46c4-8791-83ff2e433a87\") " pod="kube-system/kube-proxy-cmrrv" Oct 27 08:23:28.428029 containerd[1628]: time="2025-10-27T08:23:28.427671190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmrrv,Uid:0ef5bc99-a58f-46c4-8791-83ff2e433a87,Namespace:kube-system,Attempt:0,}" Oct 27 08:23:28.447171 containerd[1628]: time="2025-10-27T08:23:28.447036803Z" level=info msg="connecting to shim 9ada1f76c3585afb8bdef530dfa3ccaeac10ef466b2d9724d232f399f3ff3aa1" address="unix:///run/containerd/s/53c52b4bdf56512c884ca7b472c38c3b1a972873c2f8490be66c7571cccc67e8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:28.478738 systemd[1]: Started cri-containerd-9ada1f76c3585afb8bdef530dfa3ccaeac10ef466b2d9724d232f399f3ff3aa1.scope - libcontainer container 9ada1f76c3585afb8bdef530dfa3ccaeac10ef466b2d9724d232f399f3ff3aa1. Oct 27 08:23:28.567677 systemd[1]: Created slice kubepods-besteffort-podc2430e51_b5ad_4e47_8fac_aa1a5b9f7219.slice - libcontainer container kubepods-besteffort-podc2430e51_b5ad_4e47_8fac_aa1a5b9f7219.slice. 
Oct 27 08:23:28.571801 kubelet[2788]: E1027 08:23:28.571768 2788 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-65cdcdfd6d-qg2hz\" is forbidden: User \"system:node:ci-9999-9-9-k-f136f833c6\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-9999-9-9-k-f136f833c6' and this object" podUID="c2430e51-b5ad-4e47-8fac-aa1a5b9f7219" pod="tigera-operator/tigera-operator-65cdcdfd6d-qg2hz" Oct 27 08:23:28.573089 kubelet[2788]: E1027 08:23:28.571979 2788 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-9999-9-9-k-f136f833c6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-9999-9-9-k-f136f833c6' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap" Oct 27 08:23:28.573089 kubelet[2788]: E1027 08:23:28.572701 2788 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-9999-9-9-k-f136f833c6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-9999-9-9-k-f136f833c6' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Oct 27 08:23:28.611541 containerd[1628]: time="2025-10-27T08:23:28.611431019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cmrrv,Uid:0ef5bc99-a58f-46c4-8791-83ff2e433a87,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ada1f76c3585afb8bdef530dfa3ccaeac10ef466b2d9724d232f399f3ff3aa1\"" Oct 27 08:23:28.620924 containerd[1628]: time="2025-10-27T08:23:28.620718554Z" level=info msg="CreateContainer within sandbox \"9ada1f76c3585afb8bdef530dfa3ccaeac10ef466b2d9724d232f399f3ff3aa1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 08:23:28.637272 kubelet[2788]: I1027 08:23:28.637220 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wfs\" (UniqueName: \"kubernetes.io/projected/c2430e51-b5ad-4e47-8fac-aa1a5b9f7219-kube-api-access-84wfs\") pod \"tigera-operator-65cdcdfd6d-qg2hz\" (UID: \"c2430e51-b5ad-4e47-8fac-aa1a5b9f7219\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-qg2hz" Oct 27 08:23:28.637569 kubelet[2788]: I1027 08:23:28.637414 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2430e51-b5ad-4e47-8fac-aa1a5b9f7219-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-qg2hz\" (UID: \"c2430e51-b5ad-4e47-8fac-aa1a5b9f7219\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-qg2hz" Oct 27 08:23:28.643194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032530410.mount: Deactivated successfully. 
Oct 27 08:23:28.644396 containerd[1628]: time="2025-10-27T08:23:28.643602684Z" level=info msg="Container 59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:28.651615 containerd[1628]: time="2025-10-27T08:23:28.651591554Z" level=info msg="CreateContainer within sandbox \"9ada1f76c3585afb8bdef530dfa3ccaeac10ef466b2d9724d232f399f3ff3aa1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff\"" Oct 27 08:23:28.653087 containerd[1628]: time="2025-10-27T08:23:28.653071228Z" level=info msg="StartContainer for \"59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff\"" Oct 27 08:23:28.654397 containerd[1628]: time="2025-10-27T08:23:28.654377758Z" level=info msg="connecting to shim 59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff" address="unix:///run/containerd/s/53c52b4bdf56512c884ca7b472c38c3b1a972873c2f8490be66c7571cccc67e8" protocol=ttrpc version=3 Oct 27 08:23:28.675582 systemd[1]: Started cri-containerd-59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff.scope - libcontainer container 59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff. Oct 27 08:23:28.708553 containerd[1628]: time="2025-10-27T08:23:28.708115081Z" level=info msg="StartContainer for \"59c96519751f0bcc8157eb491678c09210154642cf5c96420f5c405c945a2cff\" returns successfully" Oct 27 08:23:28.883704 kubelet[2788]: I1027 08:23:28.883543 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cmrrv" podStartSLOduration=0.88351801 podStartE2EDuration="883.51801ms" podCreationTimestamp="2025-10-27 08:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:23:28.865569133 +0000 UTC m=+8.200306511" watchObservedRunningTime="2025-10-27 08:23:28.88351801 +0000 UTC m=+8.218255376" Oct 27 08:23:29.252784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132292735.mount: Deactivated successfully. Oct 27 08:23:29.753134 kubelet[2788]: E1027 08:23:29.753055 2788 projected.go:291] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 27 08:23:29.753134 kubelet[2788]: E1027 08:23:29.753117 2788 projected.go:196] Error preparing data for projected volume kube-api-access-84wfs for pod tigera-operator/tigera-operator-65cdcdfd6d-qg2hz: failed to sync configmap cache: timed out waiting for the condition Oct 27 08:23:29.753830 kubelet[2788]: E1027 08:23:29.753227 2788 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2430e51-b5ad-4e47-8fac-aa1a5b9f7219-kube-api-access-84wfs podName:c2430e51-b5ad-4e47-8fac-aa1a5b9f7219 nodeName:}" failed. No retries permitted until 2025-10-27 08:23:30.253195734 +0000 UTC m=+9.587933101 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-84wfs" (UniqueName: "kubernetes.io/projected/c2430e51-b5ad-4e47-8fac-aa1a5b9f7219-kube-api-access-84wfs") pod "tigera-operator-65cdcdfd6d-qg2hz" (UID: "c2430e51-b5ad-4e47-8fac-aa1a5b9f7219") : failed to sync configmap cache: timed out waiting for the condition Oct 27 08:23:30.373614 containerd[1628]: time="2025-10-27T08:23:30.373566072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-qg2hz,Uid:c2430e51-b5ad-4e47-8fac-aa1a5b9f7219,Namespace:tigera-operator,Attempt:0,}" Oct 27 08:23:30.396061 containerd[1628]: time="2025-10-27T08:23:30.396010737Z" level=info msg="connecting to shim 4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694" address="unix:///run/containerd/s/203ea2fa2d4c1ed4d7b17754baf76edff388cabb5e513aacaf79691ed1326419" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:30.425578 systemd[1]: Started cri-containerd-4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694.scope - libcontainer container 4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694. Oct 27 08:23:30.469473 containerd[1628]: time="2025-10-27T08:23:30.469360256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-qg2hz,Uid:c2430e51-b5ad-4e47-8fac-aa1a5b9f7219,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694\"" Oct 27 08:23:30.473984 containerd[1628]: time="2025-10-27T08:23:30.473804208Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 27 08:23:31.325317 update_engine[1599]: I20251027 08:23:31.325235 1599 update_attempter.cc:509] Updating boot flags... Oct 27 08:23:34.290677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788667190.mount: Deactivated successfully. Oct 27 08:23:37.819292 systemd[1]: Started sshd@9-46.62.164.160:22-103.181.143.69:48794.service - OpenSSH per-connection server daemon (103.181.143.69:48794). Oct 27 08:23:38.192876 systemd[1]: Started sshd@10-46.62.164.160:22-131.100.242.102:33582.service - OpenSSH per-connection server daemon (131.100.242.102:33582). 
Oct 27 08:23:38.398774 containerd[1628]: time="2025-10-27T08:23:38.398720337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:38.399740 containerd[1628]: time="2025-10-27T08:23:38.399604467Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 27 08:23:38.400520 containerd[1628]: time="2025-10-27T08:23:38.400494433Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:38.402575 containerd[1628]: time="2025-10-27T08:23:38.402544039Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:38.403122 containerd[1628]: time="2025-10-27T08:23:38.403094548Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 7.929263255s" Oct 27 08:23:38.403197 containerd[1628]: time="2025-10-27T08:23:38.403184039Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 27 08:23:38.407615 containerd[1628]: time="2025-10-27T08:23:38.407582854Z" level=info msg="CreateContainer within sandbox \"4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 27 08:23:38.414886 containerd[1628]: time="2025-10-27T08:23:38.414856011Z" level=info msg="Container cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:38.417007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998945285.mount: Deactivated successfully. Oct 27 08:23:38.437230 containerd[1628]: time="2025-10-27T08:23:38.437130729Z" level=info msg="CreateContainer within sandbox \"4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\"" Oct 27 08:23:38.438149 containerd[1628]: time="2025-10-27T08:23:38.438131901Z" level=info msg="StartContainer for \"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\"" Oct 27 08:23:38.439005 containerd[1628]: time="2025-10-27T08:23:38.438939845Z" level=info msg="connecting to shim cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6" address="unix:///run/containerd/s/203ea2fa2d4c1ed4d7b17754baf76edff388cabb5e513aacaf79691ed1326419" protocol=ttrpc version=3 Oct 27 08:23:38.465674 systemd[1]: Started cri-containerd-cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6.scope - libcontainer container cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6. 
Oct 27 08:23:38.497633 containerd[1628]: time="2025-10-27T08:23:38.497594188Z" level=info msg="StartContainer for \"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\" returns successfully" Oct 27 08:23:38.917000 sshd[3118]: Invalid user twitter from 103.181.143.69 port 48794 Oct 27 08:23:39.127833 sshd[3118]: Received disconnect from 103.181.143.69 port 48794:11: Bye Bye [preauth] Oct 27 08:23:39.127833 sshd[3118]: Disconnected from invalid user twitter 103.181.143.69 port 48794 [preauth] Oct 27 08:23:39.131269 systemd[1]: sshd@9-46.62.164.160:22-103.181.143.69:48794.service: Deactivated successfully. Oct 27 08:23:39.433438 sshd[3126]: Invalid user spegni from 131.100.242.102 port 33582 Oct 27 08:23:39.664638 sshd[3126]: Received disconnect from 131.100.242.102 port 33582:11: Bye Bye [preauth] Oct 27 08:23:39.664638 sshd[3126]: Disconnected from invalid user spegni 131.100.242.102 port 33582 [preauth] Oct 27 08:23:39.666606 systemd[1]: sshd@10-46.62.164.160:22-131.100.242.102:33582.service: Deactivated successfully. Oct 27 08:23:44.357214 sudo[1859]: pam_unix(sudo:session): session closed for user root Oct 27 08:23:44.538760 sshd[1858]: Connection closed by 147.75.109.163 port 50172 Oct 27 08:23:44.541876 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Oct 27 08:23:44.545758 systemd[1]: sshd@6-46.62.164.160:22-147.75.109.163:50172.service: Deactivated successfully. Oct 27 08:23:44.548258 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 08:23:44.550540 systemd[1]: session-7.scope: Consumed 4.626s CPU time, 158.8M memory peak. Oct 27 08:23:44.554972 systemd-logind[1593]: Session 7 logged out. Waiting for processes to exit. Oct 27 08:23:44.555736 systemd-logind[1593]: Removed session 7. Oct 27 08:23:48.858740 kubelet[2788]: I1027 08:23:48.857015 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-qg2hz" podStartSLOduration=12.924502054 podStartE2EDuration="20.856995063s" podCreationTimestamp="2025-10-27 08:23:28 +0000 UTC" firstStartedPulling="2025-10-27 08:23:30.47137325 +0000 UTC m=+9.806110577" lastFinishedPulling="2025-10-27 08:23:38.403866249 +0000 UTC m=+17.738603586" observedRunningTime="2025-10-27 08:23:38.876185035 +0000 UTC m=+18.210922382" watchObservedRunningTime="2025-10-27 08:23:48.856995063 +0000 UTC m=+28.191732401" Oct 27 08:23:48.872597 systemd[1]: Created slice kubepods-besteffort-podf3ab72f5_4059_4da1_a4ad_68da08c1da3d.slice - libcontainer container kubepods-besteffort-podf3ab72f5_4059_4da1_a4ad_68da08c1da3d.slice. 
Oct 27 08:23:48.965154 kubelet[2788]: I1027 08:23:48.965065 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfhz9\" (UniqueName: \"kubernetes.io/projected/f3ab72f5-4059-4da1-a4ad-68da08c1da3d-kube-api-access-mfhz9\") pod \"calico-typha-66c55bddb7-qj8hw\" (UID: \"f3ab72f5-4059-4da1-a4ad-68da08c1da3d\") " pod="calico-system/calico-typha-66c55bddb7-qj8hw" Oct 27 08:23:48.965647 kubelet[2788]: I1027 08:23:48.965536 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f3ab72f5-4059-4da1-a4ad-68da08c1da3d-typha-certs\") pod \"calico-typha-66c55bddb7-qj8hw\" (UID: \"f3ab72f5-4059-4da1-a4ad-68da08c1da3d\") " pod="calico-system/calico-typha-66c55bddb7-qj8hw" Oct 27 08:23:48.965714 kubelet[2788]: I1027 08:23:48.965672 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ab72f5-4059-4da1-a4ad-68da08c1da3d-tigera-ca-bundle\") pod \"calico-typha-66c55bddb7-qj8hw\" (UID: \"f3ab72f5-4059-4da1-a4ad-68da08c1da3d\") " pod="calico-system/calico-typha-66c55bddb7-qj8hw" Oct 27 08:23:49.111344 systemd[1]: Created slice kubepods-besteffort-podc9ad7606_0947_4b3f_89a2_323e60c349aa.slice - libcontainer container kubepods-besteffort-podc9ad7606_0947_4b3f_89a2_323e60c349aa.slice. Oct 27 08:23:49.168125 kubelet[2788]: I1027 08:23:49.167998 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-lib-modules\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.168602 kubelet[2788]: I1027 08:23:49.168513 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-policysync\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169553 kubelet[2788]: I1027 08:23:49.168686 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-cni-net-dir\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169553 kubelet[2788]: I1027 08:23:49.168710 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-var-run-calico\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169553 kubelet[2788]: I1027 08:23:49.168747 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c9ad7606-0947-4b3f-89a2-323e60c349aa-node-certs\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169553 kubelet[2788]: I1027 08:23:49.168765 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp869\" (UniqueName: 
\"kubernetes.io/projected/c9ad7606-0947-4b3f-89a2-323e60c349aa-kube-api-access-lp869\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169553 kubelet[2788]: I1027 08:23:49.168790 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-flexvol-driver-host\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169720 kubelet[2788]: I1027 08:23:49.168810 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-cni-bin-dir\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169720 kubelet[2788]: I1027 08:23:49.168845 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ad7606-0947-4b3f-89a2-323e60c349aa-tigera-ca-bundle\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169720 kubelet[2788]: I1027 08:23:49.168862 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-var-lib-calico\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169720 kubelet[2788]: I1027 08:23:49.168878 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-cni-log-dir\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.169720 kubelet[2788]: I1027 08:23:49.168898 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9ad7606-0947-4b3f-89a2-323e60c349aa-xtables-lock\") pod \"calico-node-r9m24\" (UID: \"c9ad7606-0947-4b3f-89a2-323e60c349aa\") " pod="calico-system/calico-node-r9m24" Oct 27 08:23:49.188403 containerd[1628]: time="2025-10-27T08:23:49.188347823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c55bddb7-qj8hw,Uid:f3ab72f5-4059-4da1-a4ad-68da08c1da3d,Namespace:calico-system,Attempt:0,}" Oct 27 08:23:49.263490 containerd[1628]: time="2025-10-27T08:23:49.260725920Z" level=info msg="connecting to shim af8a14bc0e2e3e3f4ca7039594f3ef819cc51faaa8e3b99a6ba44903933a2db8" address="unix:///run/containerd/s/d4fa404aee1e655cb64a3a555b0904a445d1f32aa15dba2c92baa19cf4396423" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:49.275779 kubelet[2788]: E1027 08:23:49.275756 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.277002 kubelet[2788]: W1027 08:23:49.276984 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 
08:23:49.277105 kubelet[2788]: E1027 08:23:49.277094 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.277752 kubelet[2788]: E1027 08:23:49.277741 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.278481 kubelet[2788]: W1027 08:23:49.278467 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.278566 kubelet[2788]: E1027 08:23:49.278555 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.278714 kubelet[2788]: E1027 08:23:49.278705 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.278841 kubelet[2788]: W1027 08:23:49.278830 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.278907 kubelet[2788]: E1027 08:23:49.278898 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.280542 kubelet[2788]: E1027 08:23:49.280509 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.280770 kubelet[2788]: W1027 08:23:49.280688 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.280770 kubelet[2788]: E1027 08:23:49.280703 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.281516 kubelet[2788]: E1027 08:23:49.281486 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.281758 kubelet[2788]: W1027 08:23:49.281625 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.284312 kubelet[2788]: E1027 08:23:49.283593 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.284312 kubelet[2788]: E1027 08:23:49.284239 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.284312 kubelet[2788]: W1027 08:23:49.284247 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.284312 kubelet[2788]: E1027 08:23:49.284255 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.284762 kubelet[2788]: E1027 08:23:49.284578 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.284762 kubelet[2788]: W1027 08:23:49.284587 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.284762 kubelet[2788]: E1027 08:23:49.284597 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.285355 kubelet[2788]: E1027 08:23:49.285241 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.286495 kubelet[2788]: W1027 08:23:49.285414 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.286495 kubelet[2788]: E1027 08:23:49.285436 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.290120 kubelet[2788]: E1027 08:23:49.290106 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.290222 kubelet[2788]: W1027 08:23:49.290213 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.290288 kubelet[2788]: E1027 08:23:49.290280 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.290572 kubelet[2788]: E1027 08:23:49.290563 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.292550 kubelet[2788]: W1027 08:23:49.292535 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.292753 kubelet[2788]: E1027 08:23:49.292743 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.294469 kubelet[2788]: E1027 08:23:49.294438 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.294598 kubelet[2788]: W1027 08:23:49.294587 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.295046 kubelet[2788]: E1027 08:23:49.294651 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.297518 kubelet[2788]: E1027 08:23:49.296424 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.297518 kubelet[2788]: W1027 08:23:49.297416 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.297518 kubelet[2788]: E1027 08:23:49.297437 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.302139 kubelet[2788]: E1027 08:23:49.301348 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.302139 kubelet[2788]: W1027 08:23:49.301359 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.302139 kubelet[2788]: E1027 08:23:49.301371 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.302139 kubelet[2788]: E1027 08:23:49.302054 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.302139 kubelet[2788]: W1027 08:23:49.302064 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.302139 kubelet[2788]: E1027 08:23:49.302075 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.329393 kubelet[2788]: E1027 08:23:49.329366 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:23:49.335176 kubelet[2788]: E1027 08:23:49.335156 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.335420 kubelet[2788]: W1027 08:23:49.335407 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.335525 kubelet[2788]: E1027 08:23:49.335515 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.352576 systemd[1]: Started cri-containerd-af8a14bc0e2e3e3f4ca7039594f3ef819cc51faaa8e3b99a6ba44903933a2db8.scope - libcontainer container af8a14bc0e2e3e3f4ca7039594f3ef819cc51faaa8e3b99a6ba44903933a2db8. Oct 27 08:23:49.358077 kubelet[2788]: E1027 08:23:49.358038 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.358250 kubelet[2788]: W1027 08:23:49.358055 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.358250 kubelet[2788]: E1027 08:23:49.358193 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.358565 kubelet[2788]: E1027 08:23:49.358538 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.358653 kubelet[2788]: W1027 08:23:49.358612 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.358653 kubelet[2788]: E1027 08:23:49.358627 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.359045 kubelet[2788]: E1027 08:23:49.359004 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.359045 kubelet[2788]: W1027 08:23:49.359013 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.359045 kubelet[2788]: E1027 08:23:49.359021 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.359499 kubelet[2788]: E1027 08:23:49.359403 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.359499 kubelet[2788]: W1027 08:23:49.359412 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.359499 kubelet[2788]: E1027 08:23:49.359420 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.361482 kubelet[2788]: E1027 08:23:49.360594 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.361589 kubelet[2788]: W1027 08:23:49.361545 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.361589 kubelet[2788]: E1027 08:23:49.361560 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.361828 kubelet[2788]: E1027 08:23:49.361791 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.361923 kubelet[2788]: W1027 08:23:49.361815 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.361923 kubelet[2788]: E1027 08:23:49.361893 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.362219 kubelet[2788]: E1027 08:23:49.362142 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.362219 kubelet[2788]: W1027 08:23:49.362150 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.362219 kubelet[2788]: E1027 08:23:49.362158 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.362386 kubelet[2788]: E1027 08:23:49.362339 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.362386 kubelet[2788]: W1027 08:23:49.362348 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.362386 kubelet[2788]: E1027 08:23:49.362355 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.362703 kubelet[2788]: E1027 08:23:49.362635 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.362703 kubelet[2788]: W1027 08:23:49.362644 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.362703 kubelet[2788]: E1027 08:23:49.362651 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.363019 kubelet[2788]: E1027 08:23:49.362970 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.363019 kubelet[2788]: W1027 08:23:49.362979 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.363019 kubelet[2788]: E1027 08:23:49.362987 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.364438 kubelet[2788]: E1027 08:23:49.364231 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.364438 kubelet[2788]: W1027 08:23:49.364240 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.364438 kubelet[2788]: E1027 08:23:49.364248 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.364914 kubelet[2788]: E1027 08:23:49.364887 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.365309 kubelet[2788]: W1027 08:23:49.365179 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.365309 kubelet[2788]: E1027 08:23:49.365192 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.368113 kubelet[2788]: E1027 08:23:49.367714 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.368113 kubelet[2788]: W1027 08:23:49.367725 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.368113 kubelet[2788]: E1027 08:23:49.367733 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.368439 kubelet[2788]: E1027 08:23:49.368429 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.368579 kubelet[2788]: W1027 08:23:49.368510 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.368579 kubelet[2788]: E1027 08:23:49.368522 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.368918 kubelet[2788]: E1027 08:23:49.368677 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.368996 kubelet[2788]: W1027 08:23:49.368961 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.368996 kubelet[2788]: E1027 08:23:49.368974 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.369437 kubelet[2788]: E1027 08:23:49.369385 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.369437 kubelet[2788]: W1027 08:23:49.369394 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.369437 kubelet[2788]: E1027 08:23:49.369402 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.369726 kubelet[2788]: E1027 08:23:49.369675 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.369726 kubelet[2788]: W1027 08:23:49.369685 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.369726 kubelet[2788]: E1027 08:23:49.369692 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.370072 kubelet[2788]: E1027 08:23:49.369996 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.370072 kubelet[2788]: W1027 08:23:49.370005 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.370072 kubelet[2788]: E1027 08:23:49.370013 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.370566 kubelet[2788]: E1027 08:23:49.370550 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.370643 kubelet[2788]: W1027 08:23:49.370607 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.370643 kubelet[2788]: E1027 08:23:49.370617 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.370860 kubelet[2788]: E1027 08:23:49.370831 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.370860 kubelet[2788]: W1027 08:23:49.370840 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.370860 kubelet[2788]: E1027 08:23:49.370847 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.371288 kubelet[2788]: E1027 08:23:49.371252 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.371465 kubelet[2788]: W1027 08:23:49.371436 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.371542 kubelet[2788]: E1027 08:23:49.371521 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.372056 kubelet[2788]: I1027 08:23:49.372042 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1b761e29-b614-4041-93ad-3a2beca6983c-kubelet-dir\") pod \"csi-node-driver-s6rbz\" (UID: \"1b761e29-b614-4041-93ad-3a2beca6983c\") " pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:23:49.372361 kubelet[2788]: E1027 08:23:49.372337 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.372361 kubelet[2788]: W1027 08:23:49.372345 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.372361 kubelet[2788]: E1027 08:23:49.372352 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.372657 kubelet[2788]: E1027 08:23:49.372647 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.372775 kubelet[2788]: W1027 08:23:49.372712 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.372775 kubelet[2788]: E1027 08:23:49.372739 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.373203 kubelet[2788]: E1027 08:23:49.373161 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.373203 kubelet[2788]: W1027 08:23:49.373170 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.373203 kubelet[2788]: E1027 08:23:49.373178 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.373548 kubelet[2788]: I1027 08:23:49.373498 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw72g\" (UniqueName: \"kubernetes.io/projected/1b761e29-b614-4041-93ad-3a2beca6983c-kube-api-access-mw72g\") pod \"csi-node-driver-s6rbz\" (UID: \"1b761e29-b614-4041-93ad-3a2beca6983c\") " pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:23:49.373861 kubelet[2788]: E1027 08:23:49.373811 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.373861 kubelet[2788]: W1027 08:23:49.373821 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.373861 kubelet[2788]: E1027 08:23:49.373829 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.374291 kubelet[2788]: I1027 08:23:49.374126 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1b761e29-b614-4041-93ad-3a2beca6983c-registration-dir\") pod \"csi-node-driver-s6rbz\" (UID: \"1b761e29-b614-4041-93ad-3a2beca6983c\") " pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:23:49.374767 kubelet[2788]: E1027 08:23:49.374671 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.374767 kubelet[2788]: W1027 08:23:49.374681 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.374767 kubelet[2788]: E1027 08:23:49.374689 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.375634 kubelet[2788]: E1027 08:23:49.375589 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.375634 kubelet[2788]: W1027 08:23:49.375598 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.375634 kubelet[2788]: E1027 08:23:49.375606 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.376407 kubelet[2788]: E1027 08:23:49.376322 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.376407 kubelet[2788]: W1027 08:23:49.376331 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.376407 kubelet[2788]: E1027 08:23:49.376339 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.376812 kubelet[2788]: I1027 08:23:49.376709 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1b761e29-b614-4041-93ad-3a2beca6983c-varrun\") pod \"csi-node-driver-s6rbz\" (UID: \"1b761e29-b614-4041-93ad-3a2beca6983c\") " pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:23:49.377687 kubelet[2788]: E1027 08:23:49.377676 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.377791 kubelet[2788]: W1027 08:23:49.377707 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.377791 kubelet[2788]: E1027 08:23:49.377717 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.377996 kubelet[2788]: E1027 08:23:49.377968 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.377996 kubelet[2788]: W1027 08:23:49.377978 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.378111 kubelet[2788]: E1027 08:23:49.378067 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.378306 kubelet[2788]: E1027 08:23:49.378290 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.378384 kubelet[2788]: W1027 08:23:49.378344 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.378384 kubelet[2788]: E1027 08:23:49.378354 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.378600 kubelet[2788]: I1027 08:23:49.378503 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1b761e29-b614-4041-93ad-3a2beca6983c-socket-dir\") pod \"csi-node-driver-s6rbz\" (UID: \"1b761e29-b614-4041-93ad-3a2beca6983c\") " pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:23:49.379192 kubelet[2788]: E1027 08:23:49.379062 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.379192 kubelet[2788]: W1027 08:23:49.379079 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.379192 kubelet[2788]: E1027 08:23:49.379088 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.379791 kubelet[2788]: E1027 08:23:49.379779 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.379938 kubelet[2788]: W1027 08:23:49.379844 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.379938 kubelet[2788]: E1027 08:23:49.379854 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.380174 kubelet[2788]: E1027 08:23:49.380132 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.380343 kubelet[2788]: W1027 08:23:49.380305 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.380343 kubelet[2788]: E1027 08:23:49.380317 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.380796 kubelet[2788]: E1027 08:23:49.380787 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.380863 kubelet[2788]: W1027 08:23:49.380839 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.380863 kubelet[2788]: E1027 08:23:49.380849 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.435634 containerd[1628]: time="2025-10-27T08:23:49.434543920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r9m24,Uid:c9ad7606-0947-4b3f-89a2-323e60c349aa,Namespace:calico-system,Attempt:0,}" Oct 27 08:23:49.466300 containerd[1628]: time="2025-10-27T08:23:49.466260974Z" level=info msg="connecting to shim 17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80" address="unix:///run/containerd/s/7b42f27709741afb03d6242558ff4d9eb23fb7efc8d1b9c0f5aaa5850c8f747c" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:23:49.484702 kubelet[2788]: E1027 08:23:49.484681 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.484878 kubelet[2788]: W1027 08:23:49.484863 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.485083 kubelet[2788]: E1027 08:23:49.485066 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.485507 kubelet[2788]: E1027 08:23:49.485497 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.485967 kubelet[2788]: W1027 08:23:49.485950 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.486130 kubelet[2788]: E1027 08:23:49.486041 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.486397 kubelet[2788]: E1027 08:23:49.486364 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.486397 kubelet[2788]: W1027 08:23:49.486374 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.486397 kubelet[2788]: E1027 08:23:49.486384 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.486873 kubelet[2788]: E1027 08:23:49.486751 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.486998 kubelet[2788]: W1027 08:23:49.486762 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.486998 kubelet[2788]: E1027 08:23:49.486961 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.487475 kubelet[2788]: E1027 08:23:49.487433 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.487743 kubelet[2788]: W1027 08:23:49.487555 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.487743 kubelet[2788]: E1027 08:23:49.487570 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.487966 kubelet[2788]: E1027 08:23:49.487903 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.487966 kubelet[2788]: W1027 08:23:49.487913 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.487966 kubelet[2788]: E1027 08:23:49.487950 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.488481 kubelet[2788]: E1027 08:23:49.488404 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.488481 kubelet[2788]: W1027 08:23:49.488415 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.488481 kubelet[2788]: E1027 08:23:49.488425 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.489137 kubelet[2788]: E1027 08:23:49.488977 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.489137 kubelet[2788]: W1027 08:23:49.488987 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.489137 kubelet[2788]: E1027 08:23:49.488997 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.489346 kubelet[2788]: E1027 08:23:49.489309 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.489346 kubelet[2788]: W1027 08:23:49.489318 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.489346 kubelet[2788]: E1027 08:23:49.489327 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.489684 kubelet[2788]: E1027 08:23:49.489675 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.489825 kubelet[2788]: W1027 08:23:49.489744 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.489825 kubelet[2788]: E1027 08:23:49.489757 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.490118 kubelet[2788]: E1027 08:23:49.490092 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.490118 kubelet[2788]: W1027 08:23:49.490102 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.490118 kubelet[2788]: E1027 08:23:49.490110 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.490705 kubelet[2788]: E1027 08:23:49.490620 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.490705 kubelet[2788]: W1027 08:23:49.490639 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.490705 kubelet[2788]: E1027 08:23:49.490648 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.491162 kubelet[2788]: E1027 08:23:49.491104 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.491162 kubelet[2788]: W1027 08:23:49.491126 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.491162 kubelet[2788]: E1027 08:23:49.491138 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.491690 kubelet[2788]: E1027 08:23:49.491591 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.491690 kubelet[2788]: W1027 08:23:49.491600 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.491690 kubelet[2788]: E1027 08:23:49.491608 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.492335 kubelet[2788]: E1027 08:23:49.492111 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.492335 kubelet[2788]: W1027 08:23:49.492121 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.492335 kubelet[2788]: E1027 08:23:49.492130 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.492604 kubelet[2788]: E1027 08:23:49.492504 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.492604 kubelet[2788]: W1027 08:23:49.492514 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.492604 kubelet[2788]: E1027 08:23:49.492521 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.494006 kubelet[2788]: E1027 08:23:49.493834 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.494006 kubelet[2788]: W1027 08:23:49.493848 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.494006 kubelet[2788]: E1027 08:23:49.493861 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.494264 kubelet[2788]: E1027 08:23:49.494171 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.494808 kubelet[2788]: W1027 08:23:49.494482 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.494808 kubelet[2788]: E1027 08:23:49.494502 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.495563 kubelet[2788]: E1027 08:23:49.495146 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.495563 kubelet[2788]: W1027 08:23:49.495156 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.495563 kubelet[2788]: E1027 08:23:49.495165 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.496220 kubelet[2788]: E1027 08:23:49.495972 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.496220 kubelet[2788]: W1027 08:23:49.495983 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.496220 kubelet[2788]: E1027 08:23:49.495992 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.496688 kubelet[2788]: E1027 08:23:49.496537 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.496688 kubelet[2788]: W1027 08:23:49.496562 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.496688 kubelet[2788]: E1027 08:23:49.496574 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.499798 kubelet[2788]: E1027 08:23:49.499120 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.499798 kubelet[2788]: W1027 08:23:49.499133 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.499798 kubelet[2788]: E1027 08:23:49.499144 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.500007 kubelet[2788]: E1027 08:23:49.499966 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.500007 kubelet[2788]: W1027 08:23:49.499975 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.500007 kubelet[2788]: E1027 08:23:49.499982 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.501138 kubelet[2788]: E1027 08:23:49.501023 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.501138 kubelet[2788]: W1027 08:23:49.501037 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.501138 kubelet[2788]: E1027 08:23:49.501049 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.502518 kubelet[2788]: E1027 08:23:49.501472 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.502518 kubelet[2788]: W1027 08:23:49.501489 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.502518 kubelet[2788]: E1027 08:23:49.501498 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 27 08:23:49.507619 systemd[1]: Started cri-containerd-17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80.scope - libcontainer container 17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80. Oct 27 08:23:49.516939 kubelet[2788]: E1027 08:23:49.516917 2788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 27 08:23:49.517039 kubelet[2788]: W1027 08:23:49.517028 2788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 27 08:23:49.517144 kubelet[2788]: E1027 08:23:49.517113 2788 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 27 08:23:49.547241 containerd[1628]: time="2025-10-27T08:23:49.547212967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r9m24,Uid:c9ad7606-0947-4b3f-89a2-323e60c349aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\"" Oct 27 08:23:49.556701 containerd[1628]: time="2025-10-27T08:23:49.556658419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 27 08:23:49.566895 containerd[1628]: time="2025-10-27T08:23:49.566822577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c55bddb7-qj8hw,Uid:f3ab72f5-4059-4da1-a4ad-68da08c1da3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"af8a14bc0e2e3e3f4ca7039594f3ef819cc51faaa8e3b99a6ba44903933a2db8\"" Oct 27 08:23:50.786554 kubelet[2788]: E1027 08:23:50.785792 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:23:51.960195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523609304.mount: Deactivated successfully. Oct 27 08:23:52.016588 containerd[1628]: time="2025-10-27T08:23:52.016530942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:52.035945 containerd[1628]: time="2025-10-27T08:23:52.017370496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Oct 27 08:23:52.035945 containerd[1628]: time="2025-10-27T08:23:52.018566861Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:52.036199 containerd[1628]: time="2025-10-27T08:23:52.020265401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.462774684s" Oct 27 08:23:52.036199 containerd[1628]: time="2025-10-27T08:23:52.036036092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 27 08:23:52.036421 containerd[1628]: time="2025-10-27T08:23:52.036294100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:52.038613 containerd[1628]: time="2025-10-27T08:23:52.037386129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 27 08:23:52.041478 containerd[1628]: time="2025-10-27T08:23:52.041431688Z" level=info msg="CreateContainer within sandbox \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 27 08:23:52.048634 containerd[1628]: 
time="2025-10-27T08:23:52.048612723Z" level=info msg="Container 5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:52.051135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280549528.mount: Deactivated successfully. Oct 27 08:23:52.086213 containerd[1628]: time="2025-10-27T08:23:52.086176727Z" level=info msg="CreateContainer within sandbox \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\"" Oct 27 08:23:52.086850 containerd[1628]: time="2025-10-27T08:23:52.086835750Z" level=info msg="StartContainer for \"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\"" Oct 27 08:23:52.088466 containerd[1628]: time="2025-10-27T08:23:52.088011079Z" level=info msg="connecting to shim 5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2" address="unix:///run/containerd/s/7b42f27709741afb03d6242558ff4d9eb23fb7efc8d1b9c0f5aaa5850c8f747c" protocol=ttrpc version=3 Oct 27 08:23:52.106573 systemd[1]: Started cri-containerd-5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2.scope - libcontainer container 5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2. Oct 27 08:23:52.139506 containerd[1628]: time="2025-10-27T08:23:52.139410665Z" level=info msg="StartContainer for \"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\" returns successfully" Oct 27 08:23:52.147016 systemd[1]: cri-containerd-5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2.scope: Deactivated successfully. Oct 27 08:23:52.166472 containerd[1628]: time="2025-10-27T08:23:52.166347158Z" level=info msg="received exit event container_id:\"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\" id:\"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\" pid:3428 exited_at:{seconds:1761553432 nanos:150097352}" Oct 27 08:23:52.174664 containerd[1628]: time="2025-10-27T08:23:52.174627657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\" id:\"5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2\" pid:3428 exited_at:{seconds:1761553432 nanos:150097352}" Oct 27 08:23:52.785557 kubelet[2788]: E1027 08:23:52.785442 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:23:52.938282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e7ae40efbebb1d2a84dae34724e41466588dd7b8324f90fad90e47ae42445b2-rootfs.mount: Deactivated successfully. 
Oct 27 08:23:54.786702 kubelet[2788]: E1027 08:23:54.786495 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:23:54.895784 containerd[1628]: time="2025-10-27T08:23:54.895723919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:54.896860 containerd[1628]: time="2025-10-27T08:23:54.896705466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Oct 27 08:23:54.898039 containerd[1628]: time="2025-10-27T08:23:54.897996149Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:54.900276 containerd[1628]: time="2025-10-27T08:23:54.900240961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:54.900972 containerd[1628]: time="2025-10-27T08:23:54.900700288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.8622638s" Oct 27 08:23:54.900972 containerd[1628]: time="2025-10-27T08:23:54.900724789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 27 08:23:54.901705 containerd[1628]: time="2025-10-27T08:23:54.901687096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 27 08:23:54.917364 containerd[1628]: time="2025-10-27T08:23:54.917324609Z" level=info msg="CreateContainer within sandbox \"af8a14bc0e2e3e3f4ca7039594f3ef819cc51faaa8e3b99a6ba44903933a2db8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 27 08:23:54.926895 containerd[1628]: time="2025-10-27T08:23:54.925159114Z" level=info msg="Container b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:54.932840 containerd[1628]: time="2025-10-27T08:23:54.932800909Z" level=info msg="CreateContainer within sandbox \"af8a14bc0e2e3e3f4ca7039594f3ef819cc51faaa8e3b99a6ba44903933a2db8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1\"" Oct 27 08:23:54.933359 containerd[1628]: time="2025-10-27T08:23:54.933313670Z" level=info msg="StartContainer for \"b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1\"" Oct 27 08:23:54.934627 containerd[1628]: time="2025-10-27T08:23:54.934605426Z" level=info msg="connecting to shim b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1" address="unix:///run/containerd/s/d4fa404aee1e655cb64a3a555b0904a445d1f32aa15dba2c92baa19cf4396423" protocol=ttrpc version=3 Oct 27 08:23:54.958719 systemd[1]: Started 
cri-containerd-b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1.scope - libcontainer container b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1. Oct 27 08:23:55.009621 containerd[1628]: time="2025-10-27T08:23:55.009568872Z" level=info msg="StartContainer for \"b577cb3bcd905e187139857e1d172bb16e90bbfd766dc923b647f4fba2cfe2b1\" returns successfully" Oct 27 08:23:55.953318 kubelet[2788]: I1027 08:23:55.953166 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c55bddb7-qj8hw" podStartSLOduration=2.621669741 podStartE2EDuration="7.953145232s" podCreationTimestamp="2025-10-27 08:23:48 +0000 UTC" firstStartedPulling="2025-10-27 08:23:49.569984831 +0000 UTC m=+28.904722169" lastFinishedPulling="2025-10-27 08:23:54.901460323 +0000 UTC m=+34.236197660" observedRunningTime="2025-10-27 08:23:55.952381681 +0000 UTC m=+35.287119018" watchObservedRunningTime="2025-10-27 08:23:55.953145232 +0000 UTC m=+35.287882609" Oct 27 08:23:56.786491 kubelet[2788]: E1027 08:23:56.786220 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:23:56.932761 kubelet[2788]: I1027 08:23:56.932701 2788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:23:58.787110 kubelet[2788]: E1027 08:23:58.786915 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:23:58.992554 containerd[1628]: time="2025-10-27T08:23:58.992503407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:58.993726 containerd[1628]: time="2025-10-27T08:23:58.993612271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 27 08:23:58.994923 containerd[1628]: time="2025-10-27T08:23:58.994897629Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:58.996973 containerd[1628]: time="2025-10-27T08:23:58.996951048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:23:58.997519 containerd[1628]: time="2025-10-27T08:23:58.997305833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.095448706s" Oct 27 08:23:58.997519 containerd[1628]: time="2025-10-27T08:23:58.997339752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 27 08:23:59.006547 
containerd[1628]: time="2025-10-27T08:23:59.006520294Z" level=info msg="CreateContainer within sandbox \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 27 08:23:59.020466 containerd[1628]: time="2025-10-27T08:23:59.015116183Z" level=info msg="Container 060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:23:59.031180 containerd[1628]: time="2025-10-27T08:23:59.031145802Z" level=info msg="CreateContainer within sandbox \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\"" Oct 27 08:23:59.034546 containerd[1628]: time="2025-10-27T08:23:59.034503408Z" level=info msg="StartContainer for \"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\"" Oct 27 08:23:59.040117 containerd[1628]: time="2025-10-27T08:23:59.040009111Z" level=info msg="connecting to shim 060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff" address="unix:///run/containerd/s/7b42f27709741afb03d6242558ff4d9eb23fb7efc8d1b9c0f5aaa5850c8f747c" protocol=ttrpc version=3 Oct 27 08:23:59.063591 systemd[1]: Started cri-containerd-060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff.scope - libcontainer container 060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff. Oct 27 08:23:59.129258 containerd[1628]: time="2025-10-27T08:23:59.129198258Z" level=info msg="StartContainer for \"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\" returns successfully" Oct 27 08:23:59.601038 systemd[1]: cri-containerd-060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff.scope: Deactivated successfully. Oct 27 08:23:59.601267 systemd[1]: cri-containerd-060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff.scope: Consumed 485ms CPU time, 165.1M memory peak, 8.9M read from disk, 171.3M written to disk. Oct 27 08:23:59.635207 containerd[1628]: time="2025-10-27T08:23:59.635159109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\" id:\"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\" pid:3527 exited_at:{seconds:1761553439 nanos:617965918}" Oct 27 08:23:59.635353 containerd[1628]: time="2025-10-27T08:23:59.635228542Z" level=info msg="received exit event container_id:\"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\" id:\"060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff\" pid:3527 exited_at:{seconds:1761553439 nanos:617965918}" Oct 27 08:23:59.689350 kubelet[2788]: I1027 08:23:59.687700 2788 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 27 08:23:59.755892 systemd[1]: Created slice kubepods-burstable-podd04f9940_0e64_4166_93cf_749a47710fc1.slice - libcontainer container kubepods-burstable-podd04f9940_0e64_4166_93cf_749a47710fc1.slice. 
Oct 27 08:23:59.758511 kubelet[2788]: I1027 08:23:59.758470 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls725\" (UniqueName: \"kubernetes.io/projected/d04f9940-0e64-4166-93cf-749a47710fc1-kube-api-access-ls725\") pod \"coredns-66bc5c9577-tphb5\" (UID: \"d04f9940-0e64-4166-93cf-749a47710fc1\") " pod="kube-system/coredns-66bc5c9577-tphb5" Oct 27 08:23:59.759481 kubelet[2788]: I1027 08:23:59.759447 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d04f9940-0e64-4166-93cf-749a47710fc1-config-volume\") pod \"coredns-66bc5c9577-tphb5\" (UID: \"d04f9940-0e64-4166-93cf-749a47710fc1\") " pod="kube-system/coredns-66bc5c9577-tphb5" Oct 27 08:23:59.767035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-060151df8a058f7a94e89c52294cd286df93895458cc1dcc019738aa0c3b8cff-rootfs.mount: Deactivated successfully. Oct 27 08:23:59.776924 systemd[1]: Created slice kubepods-besteffort-pod1bcea6e5_3c39_41c9_92bc_ee324a63b0a8.slice - libcontainer container kubepods-besteffort-pod1bcea6e5_3c39_41c9_92bc_ee324a63b0a8.slice. Oct 27 08:23:59.784050 systemd[1]: Created slice kubepods-burstable-pod53ab1dbd_3950_4a90_ad09_9df752a49a33.slice - libcontainer container kubepods-burstable-pod53ab1dbd_3950_4a90_ad09_9df752a49a33.slice. Oct 27 08:23:59.801945 systemd[1]: Created slice kubepods-besteffort-pode5f8aee0_010a_43df_b3cc_29e7716b4073.slice - libcontainer container kubepods-besteffort-pode5f8aee0_010a_43df_b3cc_29e7716b4073.slice. Oct 27 08:23:59.818041 systemd[1]: Created slice kubepods-besteffort-pod96a60c22_8a13_49d1_8749_b73cb7e464a7.slice - libcontainer container kubepods-besteffort-pod96a60c22_8a13_49d1_8749_b73cb7e464a7.slice. Oct 27 08:23:59.825086 systemd[1]: Created slice kubepods-besteffort-pod79766d3c_55af_44b2_853b_a76f9b90d865.slice - libcontainer container kubepods-besteffort-pod79766d3c_55af_44b2_853b_a76f9b90d865.slice. Oct 27 08:23:59.836205 systemd[1]: Created slice kubepods-besteffort-pod19247ceb_194d_4562_847c_8010afd7e20d.slice - libcontainer container kubepods-besteffort-pod19247ceb_194d_4562_847c_8010afd7e20d.slice. 
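The "Created slice" entries above follow the kubelet's systemd cgroup naming scheme: each pod gets a slice named after its QoS class and UID, with dashes in the UID escaped to underscores. A small sketch of that mapping; podSliceName is a hypothetical helper, not kubelet's implementation.

```go
// Reconstruct the systemd slice names seen in the log from pod QoS class
// and UID: kubepods-<qos>-pod<uid>.slice, with "-" in the UID replaced by "_".
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Reproduces names from the log above:
	fmt.Println(podSliceName("burstable", "d04f9940-0e64-4166-93cf-749a47710fc1"))
	// -> kubepods-burstable-podd04f9940_0e64_4166_93cf_749a47710fc1.slice
	fmt.Println(podSliceName("besteffort", "19247ceb-194d-4562-847c-8010afd7e20d"))
	// -> kubepods-besteffort-pod19247ceb_194d_4562_847c_8010afd7e20d.slice
}
```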
Oct 27 08:23:59.861483 kubelet[2788]: I1027 08:23:59.860506 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/96a60c22-8a13-49d1-8749-b73cb7e464a7-calico-apiserver-certs\") pod \"calico-apiserver-68d8c5c9bc-jsc7m\" (UID: \"96a60c22-8a13-49d1-8749-b73cb7e464a7\") " pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" Oct 27 08:23:59.861483 kubelet[2788]: I1027 08:23:59.860559 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69cz\" (UniqueName: \"kubernetes.io/projected/19247ceb-194d-4562-847c-8010afd7e20d-kube-api-access-j69cz\") pod \"whisker-6fcdc8b4cb-vjzwz\" (UID: \"19247ceb-194d-4562-847c-8010afd7e20d\") " pod="calico-system/whisker-6fcdc8b4cb-vjzwz" Oct 27 08:23:59.861483 kubelet[2788]: I1027 08:23:59.860575 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5m2q\" (UniqueName: \"kubernetes.io/projected/79766d3c-55af-44b2-853b-a76f9b90d865-kube-api-access-v5m2q\") pod \"goldmane-7c778bb748-wd8vm\" (UID: \"79766d3c-55af-44b2-853b-a76f9b90d865\") " pod="calico-system/goldmane-7c778bb748-wd8vm" Oct 27 08:23:59.861483 kubelet[2788]: I1027 08:23:59.860637 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1bcea6e5-3c39-41c9-92bc-ee324a63b0a8-calico-apiserver-certs\") pod \"calico-apiserver-68d8c5c9bc-f56tm\" (UID: \"1bcea6e5-3c39-41c9-92bc-ee324a63b0a8\") " pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" Oct 27 08:23:59.861483 kubelet[2788]: I1027 08:23:59.860664 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7hnm\" (UniqueName: \"kubernetes.io/projected/e5f8aee0-010a-43df-b3cc-29e7716b4073-kube-api-access-m7hnm\") pod \"calico-kube-controllers-74d68549b8-grhgf\" (UID: \"e5f8aee0-010a-43df-b3cc-29e7716b4073\") " pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" Oct 27 08:23:59.863220 kubelet[2788]: I1027 08:23:59.860692 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/79766d3c-55af-44b2-853b-a76f9b90d865-goldmane-key-pair\") pod \"goldmane-7c778bb748-wd8vm\" (UID: \"79766d3c-55af-44b2-853b-a76f9b90d865\") " pod="calico-system/goldmane-7c778bb748-wd8vm" Oct 27 08:23:59.863220 kubelet[2788]: I1027 08:23:59.860752 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5f8aee0-010a-43df-b3cc-29e7716b4073-tigera-ca-bundle\") pod \"calico-kube-controllers-74d68549b8-grhgf\" (UID: \"e5f8aee0-010a-43df-b3cc-29e7716b4073\") " pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" Oct 27 08:23:59.863220 kubelet[2788]: I1027 08:23:59.860781 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d9rr\" (UniqueName: \"kubernetes.io/projected/96a60c22-8a13-49d1-8749-b73cb7e464a7-kube-api-access-2d9rr\") pod \"calico-apiserver-68d8c5c9bc-jsc7m\" (UID: \"96a60c22-8a13-49d1-8749-b73cb7e464a7\") " pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" Oct 27 08:23:59.863220 kubelet[2788]: I1027 08:23:59.860850 2788 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19247ceb-194d-4562-847c-8010afd7e20d-whisker-backend-key-pair\") pod \"whisker-6fcdc8b4cb-vjzwz\" (UID: \"19247ceb-194d-4562-847c-8010afd7e20d\") " pod="calico-system/whisker-6fcdc8b4cb-vjzwz" Oct 27 08:23:59.863220 kubelet[2788]: I1027 08:23:59.860865 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvx4\" (UniqueName: \"kubernetes.io/projected/53ab1dbd-3950-4a90-ad09-9df752a49a33-kube-api-access-jlvx4\") pod \"coredns-66bc5c9577-lwwn5\" (UID: \"53ab1dbd-3950-4a90-ad09-9df752a49a33\") " pod="kube-system/coredns-66bc5c9577-lwwn5" Oct 27 08:23:59.863373 kubelet[2788]: I1027 08:23:59.861027 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8qtx\" (UniqueName: \"kubernetes.io/projected/1bcea6e5-3c39-41c9-92bc-ee324a63b0a8-kube-api-access-j8qtx\") pod \"calico-apiserver-68d8c5c9bc-f56tm\" (UID: \"1bcea6e5-3c39-41c9-92bc-ee324a63b0a8\") " pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" Oct 27 08:23:59.863373 kubelet[2788]: I1027 08:23:59.861112 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19247ceb-194d-4562-847c-8010afd7e20d-whisker-ca-bundle\") pod \"whisker-6fcdc8b4cb-vjzwz\" (UID: \"19247ceb-194d-4562-847c-8010afd7e20d\") " pod="calico-system/whisker-6fcdc8b4cb-vjzwz" Oct 27 08:23:59.863373 kubelet[2788]: I1027 08:23:59.861131 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53ab1dbd-3950-4a90-ad09-9df752a49a33-config-volume\") pod \"coredns-66bc5c9577-lwwn5\" (UID: \"53ab1dbd-3950-4a90-ad09-9df752a49a33\") " pod="kube-system/coredns-66bc5c9577-lwwn5" Oct 27 08:23:59.863373 kubelet[2788]: I1027 08:23:59.861186 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79766d3c-55af-44b2-853b-a76f9b90d865-config\") pod \"goldmane-7c778bb748-wd8vm\" (UID: \"79766d3c-55af-44b2-853b-a76f9b90d865\") " pod="calico-system/goldmane-7c778bb748-wd8vm" Oct 27 08:23:59.863373 kubelet[2788]: I1027 08:23:59.861220 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79766d3c-55af-44b2-853b-a76f9b90d865-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-wd8vm\" (UID: \"79766d3c-55af-44b2-853b-a76f9b90d865\") " pod="calico-system/goldmane-7c778bb748-wd8vm" Oct 27 08:23:59.999476 containerd[1628]: time="2025-10-27T08:23:59.999061246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 27 08:24:00.087293 containerd[1628]: time="2025-10-27T08:24:00.087231214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tphb5,Uid:d04f9940-0e64-4166-93cf-749a47710fc1,Namespace:kube-system,Attempt:0,}" Oct 27 08:24:00.113771 containerd[1628]: time="2025-10-27T08:24:00.113621341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-f56tm,Uid:1bcea6e5-3c39-41c9-92bc-ee324a63b0a8,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:24:00.142663 containerd[1628]: time="2025-10-27T08:24:00.141547982Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-wd8vm,Uid:79766d3c-55af-44b2-853b-a76f9b90d865,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:00.142663 containerd[1628]: time="2025-10-27T08:24:00.141968051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d68549b8-grhgf,Uid:e5f8aee0-010a-43df-b3cc-29e7716b4073,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:00.143497 containerd[1628]: time="2025-10-27T08:24:00.142841568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwwn5,Uid:53ab1dbd-3950-4a90-ad09-9df752a49a33,Namespace:kube-system,Attempt:0,}" Oct 27 08:24:00.143497 containerd[1628]: time="2025-10-27T08:24:00.142953096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-jsc7m,Uid:96a60c22-8a13-49d1-8749-b73cb7e464a7,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:24:00.144794 containerd[1628]: time="2025-10-27T08:24:00.144707885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fcdc8b4cb-vjzwz,Uid:19247ceb-194d-4562-847c-8010afd7e20d,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:00.317961 containerd[1628]: time="2025-10-27T08:24:00.317919177Z" level=error msg="Failed to destroy network for sandbox \"63797cf91de258fbd7d3137a944738de6cbef2d8779119eb9749aba0a9441aaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.322379 containerd[1628]: time="2025-10-27T08:24:00.322220876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wd8vm,Uid:79766d3c-55af-44b2-853b-a76f9b90d865,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63797cf91de258fbd7d3137a944738de6cbef2d8779119eb9749aba0a9441aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.325415 kubelet[2788]: E1027 08:24:00.325205 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63797cf91de258fbd7d3137a944738de6cbef2d8779119eb9749aba0a9441aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.325415 kubelet[2788]: E1027 08:24:00.325290 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63797cf91de258fbd7d3137a944738de6cbef2d8779119eb9749aba0a9441aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wd8vm" Oct 27 08:24:00.325415 kubelet[2788]: E1027 08:24:00.325308 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63797cf91de258fbd7d3137a944738de6cbef2d8779119eb9749aba0a9441aaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wd8vm" Oct 27 08:24:00.325658 
kubelet[2788]: E1027 08:24:00.325377 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-wd8vm_calico-system(79766d3c-55af-44b2-853b-a76f9b90d865)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-wd8vm_calico-system(79766d3c-55af-44b2-853b-a76f9b90d865)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63797cf91de258fbd7d3137a944738de6cbef2d8779119eb9749aba0a9441aaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:24:00.332031 containerd[1628]: time="2025-10-27T08:24:00.331591171Z" level=error msg="Failed to destroy network for sandbox \"d1392e00bb2046e55c64ef3028530f5fef482e77abb6541ff5e592257f22bda3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.335893 containerd[1628]: time="2025-10-27T08:24:00.335735257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwwn5,Uid:53ab1dbd-3950-4a90-ad09-9df752a49a33,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392e00bb2046e55c64ef3028530f5fef482e77abb6541ff5e592257f22bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.336005 kubelet[2788]: E1027 08:24:00.335949 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392e00bb2046e55c64ef3028530f5fef482e77abb6541ff5e592257f22bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.336005 kubelet[2788]: E1027 08:24:00.335992 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392e00bb2046e55c64ef3028530f5fef482e77abb6541ff5e592257f22bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lwwn5" Oct 27 08:24:00.336513 kubelet[2788]: E1027 08:24:00.336011 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1392e00bb2046e55c64ef3028530f5fef482e77abb6541ff5e592257f22bda3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lwwn5" Oct 27 08:24:00.336513 kubelet[2788]: E1027 08:24:00.336052 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lwwn5_kube-system(53ab1dbd-3950-4a90-ad09-9df752a49a33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-lwwn5_kube-system(53ab1dbd-3950-4a90-ad09-9df752a49a33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1392e00bb2046e55c64ef3028530f5fef482e77abb6541ff5e592257f22bda3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lwwn5" podUID="53ab1dbd-3950-4a90-ad09-9df752a49a33" Oct 27 08:24:00.354670 containerd[1628]: time="2025-10-27T08:24:00.354561705Z" level=error msg="Failed to destroy network for sandbox \"24e1496519c7ce00e8de2d091bea0d34eb982863ed9f10c92f61e7495dab5507\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.355324 containerd[1628]: time="2025-10-27T08:24:00.355291648Z" level=error msg="Failed to destroy network for sandbox \"678aa5c8156eb4f08d6407f39aea550ef81bed367a7b75859527645c69b7180e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.356135 containerd[1628]: time="2025-10-27T08:24:00.356101745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-f56tm,Uid:1bcea6e5-3c39-41c9-92bc-ee324a63b0a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e1496519c7ce00e8de2d091bea0d34eb982863ed9f10c92f61e7495dab5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.356936 kubelet[2788]: E1027 08:24:00.356465 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e1496519c7ce00e8de2d091bea0d34eb982863ed9f10c92f61e7495dab5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.357040 kubelet[2788]: E1027 08:24:00.356963 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e1496519c7ce00e8de2d091bea0d34eb982863ed9f10c92f61e7495dab5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" Oct 27 08:24:00.357040 kubelet[2788]: E1027 08:24:00.357005 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24e1496519c7ce00e8de2d091bea0d34eb982863ed9f10c92f61e7495dab5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" Oct 27 08:24:00.357203 containerd[1628]: time="2025-10-27T08:24:00.357102342Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-tphb5,Uid:d04f9940-0e64-4166-93cf-749a47710fc1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"678aa5c8156eb4f08d6407f39aea550ef81bed367a7b75859527645c69b7180e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.357845 kubelet[2788]: E1027 08:24:00.357808 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68d8c5c9bc-f56tm_calico-apiserver(1bcea6e5-3c39-41c9-92bc-ee324a63b0a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68d8c5c9bc-f56tm_calico-apiserver(1bcea6e5-3c39-41c9-92bc-ee324a63b0a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24e1496519c7ce00e8de2d091bea0d34eb982863ed9f10c92f61e7495dab5507\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:24:00.357944 kubelet[2788]: E1027 08:24:00.357925 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678aa5c8156eb4f08d6407f39aea550ef81bed367a7b75859527645c69b7180e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.357978 kubelet[2788]: E1027 08:24:00.357954 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678aa5c8156eb4f08d6407f39aea550ef81bed367a7b75859527645c69b7180e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tphb5" Oct 27 08:24:00.357978 kubelet[2788]: E1027 08:24:00.357968 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"678aa5c8156eb4f08d6407f39aea550ef81bed367a7b75859527645c69b7180e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tphb5" Oct 27 08:24:00.358015 kubelet[2788]: E1027 08:24:00.358001 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tphb5_kube-system(d04f9940-0e64-4166-93cf-749a47710fc1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tphb5_kube-system(d04f9940-0e64-4166-93cf-749a47710fc1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"678aa5c8156eb4f08d6407f39aea550ef81bed367a7b75859527645c69b7180e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tphb5" podUID="d04f9940-0e64-4166-93cf-749a47710fc1" Oct 27 08:24:00.363320 
containerd[1628]: time="2025-10-27T08:24:00.363278854Z" level=error msg="Failed to destroy network for sandbox \"3061b9cdcf061c61bad1370aeb004ac97508ab66a006fb41fea69e84ac2c65ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.364935 containerd[1628]: time="2025-10-27T08:24:00.364850860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d68549b8-grhgf,Uid:e5f8aee0-010a-43df-b3cc-29e7716b4073,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3061b9cdcf061c61bad1370aeb004ac97508ab66a006fb41fea69e84ac2c65ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.365374 kubelet[2788]: E1027 08:24:00.365349 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3061b9cdcf061c61bad1370aeb004ac97508ab66a006fb41fea69e84ac2c65ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.365798 kubelet[2788]: E1027 08:24:00.365505 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3061b9cdcf061c61bad1370aeb004ac97508ab66a006fb41fea69e84ac2c65ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" Oct 27 08:24:00.365798 kubelet[2788]: E1027 08:24:00.365527 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3061b9cdcf061c61bad1370aeb004ac97508ab66a006fb41fea69e84ac2c65ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" Oct 27 08:24:00.365798 kubelet[2788]: E1027 08:24:00.365568 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74d68549b8-grhgf_calico-system(e5f8aee0-010a-43df-b3cc-29e7716b4073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74d68549b8-grhgf_calico-system(e5f8aee0-010a-43df-b3cc-29e7716b4073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3061b9cdcf061c61bad1370aeb004ac97508ab66a006fb41fea69e84ac2c65ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:00.374024 containerd[1628]: time="2025-10-27T08:24:00.373965432Z" level=error msg="Failed to destroy network for sandbox \"90a192f744cb9d01369b5810a8dab72baaf99db8d1c064460a378fb00563a18f\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.375337 containerd[1628]: time="2025-10-27T08:24:00.375316606Z" level=error msg="Failed to destroy network for sandbox \"4f342fe1bba0394230beb3f2082d25765057db4081fc9b5bbbc503fbe217d9d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.375518 containerd[1628]: time="2025-10-27T08:24:00.375345475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-jsc7m,Uid:96a60c22-8a13-49d1-8749-b73cb7e464a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a192f744cb9d01369b5810a8dab72baaf99db8d1c064460a378fb00563a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.375715 kubelet[2788]: E1027 08:24:00.375670 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a192f744cb9d01369b5810a8dab72baaf99db8d1c064460a378fb00563a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.375770 kubelet[2788]: E1027 08:24:00.375723 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a192f744cb9d01369b5810a8dab72baaf99db8d1c064460a378fb00563a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" Oct 27 08:24:00.375770 kubelet[2788]: E1027 08:24:00.375740 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a192f744cb9d01369b5810a8dab72baaf99db8d1c064460a378fb00563a18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" Oct 27 08:24:00.376309 kubelet[2788]: E1027 08:24:00.375787 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68d8c5c9bc-jsc7m_calico-apiserver(96a60c22-8a13-49d1-8749-b73cb7e464a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68d8c5c9bc-jsc7m_calico-apiserver(96a60c22-8a13-49d1-8749-b73cb7e464a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90a192f744cb9d01369b5810a8dab72baaf99db8d1c064460a378fb00563a18f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:24:00.376882 containerd[1628]: time="2025-10-27T08:24:00.376722492Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6fcdc8b4cb-vjzwz,Uid:19247ceb-194d-4562-847c-8010afd7e20d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f342fe1bba0394230beb3f2082d25765057db4081fc9b5bbbc503fbe217d9d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.376962 kubelet[2788]: E1027 08:24:00.376896 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f342fe1bba0394230beb3f2082d25765057db4081fc9b5bbbc503fbe217d9d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.376962 kubelet[2788]: E1027 08:24:00.376956 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f342fe1bba0394230beb3f2082d25765057db4081fc9b5bbbc503fbe217d9d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fcdc8b4cb-vjzwz" Oct 27 08:24:00.377022 kubelet[2788]: E1027 08:24:00.376972 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f342fe1bba0394230beb3f2082d25765057db4081fc9b5bbbc503fbe217d9d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fcdc8b4cb-vjzwz" Oct 27 08:24:00.377334 kubelet[2788]: E1027 08:24:00.377043 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fcdc8b4cb-vjzwz_calico-system(19247ceb-194d-4562-847c-8010afd7e20d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fcdc8b4cb-vjzwz_calico-system(19247ceb-194d-4562-847c-8010afd7e20d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f342fe1bba0394230beb3f2082d25765057db4081fc9b5bbbc503fbe217d9d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fcdc8b4cb-vjzwz" podUID="19247ceb-194d-4562-847c-8010afd7e20d" Oct 27 08:24:00.801134 systemd[1]: Created slice kubepods-besteffort-pod1b761e29_b614_4041_93ad_3a2beca6983c.slice - libcontainer container kubepods-besteffort-pod1b761e29_b614_4041_93ad_3a2beca6983c.slice. 
Oct 27 08:24:00.808540 containerd[1628]: time="2025-10-27T08:24:00.808402970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6rbz,Uid:1b761e29-b614-4041-93ad-3a2beca6983c,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:00.891291 containerd[1628]: time="2025-10-27T08:24:00.891228158Z" level=error msg="Failed to destroy network for sandbox \"dd8e02d195cf3147c86cbc3c14390e0edb2b80fb2ddbfcedba72f2aa3c345ec8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.892443 containerd[1628]: time="2025-10-27T08:24:00.892411088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6rbz,Uid:1b761e29-b614-4041-93ad-3a2beca6983c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8e02d195cf3147c86cbc3c14390e0edb2b80fb2ddbfcedba72f2aa3c345ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.892730 kubelet[2788]: E1027 08:24:00.892668 2788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8e02d195cf3147c86cbc3c14390e0edb2b80fb2ddbfcedba72f2aa3c345ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 27 08:24:00.893019 kubelet[2788]: E1027 08:24:00.892744 2788 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8e02d195cf3147c86cbc3c14390e0edb2b80fb2ddbfcedba72f2aa3c345ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:24:00.893019 kubelet[2788]: E1027 08:24:00.892787 2788 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd8e02d195cf3147c86cbc3c14390e0edb2b80fb2ddbfcedba72f2aa3c345ec8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6rbz" Oct 27 08:24:00.893019 kubelet[2788]: E1027 08:24:00.892838 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd8e02d195cf3147c86cbc3c14390e0edb2b80fb2ddbfcedba72f2aa3c345ec8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:24:01.019352 systemd[1]: run-netns-cni\x2dffe99798\x2dc34f\x2d5dcb\x2db3e5\x2d92985a3992dd.mount: Deactivated successfully. 
Oct 27 08:24:01.019525 systemd[1]: run-netns-cni\x2d27ba3ba2\x2d5d88\x2d1b46\x2d1828\x2d664b41c76d47.mount: Deactivated successfully. Oct 27 08:24:01.019643 systemd[1]: run-netns-cni\x2d4d6e48ae\x2deabe\x2d44cd\x2d083c\x2dddda15bec406.mount: Deactivated successfully. Oct 27 08:24:01.019725 systemd[1]: run-netns-cni\x2d25949959\x2d26cb\x2db30b\x2d02d8\x2db8d2b412508c.mount: Deactivated successfully. Oct 27 08:24:08.608833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607646867.mount: Deactivated successfully. Oct 27 08:24:08.648608 containerd[1628]: time="2025-10-27T08:24:08.647807145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:24:08.650105 containerd[1628]: time="2025-10-27T08:24:08.650083987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 27 08:24:08.654217 containerd[1628]: time="2025-10-27T08:24:08.654169423Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:24:08.655597 containerd[1628]: time="2025-10-27T08:24:08.654744168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 08:24:08.655597 containerd[1628]: time="2025-10-27T08:24:08.655500062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.656342333s" Oct 27 08:24:08.655597 containerd[1628]: time="2025-10-27T08:24:08.655523018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 27 08:24:08.679901 containerd[1628]: time="2025-10-27T08:24:08.679850686Z" level=info msg="CreateContainer within sandbox \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 27 08:24:08.713752 containerd[1628]: time="2025-10-27T08:24:08.713643715Z" level=info msg="Container 1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:08.714683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240868832.mount: Deactivated successfully. 
Oct 27 08:24:08.769843 containerd[1628]: time="2025-10-27T08:24:08.769782446Z" level=info msg="CreateContainer within sandbox \"17690ee6d654a31fddac0b195adedeac22b0cecc56c5ffe1d30b928b8919cd80\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\"" Oct 27 08:24:08.770762 containerd[1628]: time="2025-10-27T08:24:08.770563430Z" level=info msg="StartContainer for \"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\"" Oct 27 08:24:08.779771 containerd[1628]: time="2025-10-27T08:24:08.779727004Z" level=info msg="connecting to shim 1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5" address="unix:///run/containerd/s/7b42f27709741afb03d6242558ff4d9eb23fb7efc8d1b9c0f5aaa5850c8f747c" protocol=ttrpc version=3 Oct 27 08:24:08.888720 systemd[1]: Started cri-containerd-1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5.scope - libcontainer container 1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5. Oct 27 08:24:08.945555 containerd[1628]: time="2025-10-27T08:24:08.944835546Z" level=info msg="StartContainer for \"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" returns successfully" Oct 27 08:24:09.051228 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 27 08:24:09.054814 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 27 08:24:09.358474 kubelet[2788]: I1027 08:24:09.357553 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r9m24" podStartSLOduration=1.257591712 podStartE2EDuration="20.357534843s" podCreationTimestamp="2025-10-27 08:23:49 +0000 UTC" firstStartedPulling="2025-10-27 08:23:49.556377925 +0000 UTC m=+28.891115262" lastFinishedPulling="2025-10-27 08:24:08.656321055 +0000 UTC m=+47.991058393" observedRunningTime="2025-10-27 08:24:09.099377228 +0000 UTC m=+48.434114606" watchObservedRunningTime="2025-10-27 08:24:09.357534843 +0000 UTC m=+48.692272180" Oct 27 08:24:09.371102 containerd[1628]: time="2025-10-27T08:24:09.371030024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"2a7eeaccc1d8f4dcfe9f994898e1762cdfde58ecf2f2d7a636f35f91e7a623e6\" pid:3841 exit_status:1 exited_at:{seconds:1761553449 nanos:367291267}" Oct 27 08:24:09.435446 kubelet[2788]: I1027 08:24:09.435402 2788 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19247ceb-194d-4562-847c-8010afd7e20d-whisker-ca-bundle\") pod \"19247ceb-194d-4562-847c-8010afd7e20d\" (UID: \"19247ceb-194d-4562-847c-8010afd7e20d\") " Oct 27 08:24:09.435608 kubelet[2788]: I1027 08:24:09.435510 2788 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j69cz\" (UniqueName: \"kubernetes.io/projected/19247ceb-194d-4562-847c-8010afd7e20d-kube-api-access-j69cz\") pod \"19247ceb-194d-4562-847c-8010afd7e20d\" (UID: \"19247ceb-194d-4562-847c-8010afd7e20d\") " Oct 27 08:24:09.435608 kubelet[2788]: I1027 08:24:09.435583 2788 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19247ceb-194d-4562-847c-8010afd7e20d-whisker-backend-key-pair\") pod \"19247ceb-194d-4562-847c-8010afd7e20d\" (UID: \"19247ceb-194d-4562-847c-8010afd7e20d\") " Oct 27 08:24:09.439662 
kubelet[2788]: I1027 08:24:09.439515 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19247ceb-194d-4562-847c-8010afd7e20d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "19247ceb-194d-4562-847c-8010afd7e20d" (UID: "19247ceb-194d-4562-847c-8010afd7e20d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 08:24:09.449334 kubelet[2788]: I1027 08:24:09.449292 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19247ceb-194d-4562-847c-8010afd7e20d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "19247ceb-194d-4562-847c-8010afd7e20d" (UID: "19247ceb-194d-4562-847c-8010afd7e20d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 08:24:09.450598 kubelet[2788]: I1027 08:24:09.450055 2788 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19247ceb-194d-4562-847c-8010afd7e20d-kube-api-access-j69cz" (OuterVolumeSpecName: "kube-api-access-j69cz") pod "19247ceb-194d-4562-847c-8010afd7e20d" (UID: "19247ceb-194d-4562-847c-8010afd7e20d"). InnerVolumeSpecName "kube-api-access-j69cz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 08:24:09.536112 kubelet[2788]: I1027 08:24:09.536063 2788 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j69cz\" (UniqueName: \"kubernetes.io/projected/19247ceb-194d-4562-847c-8010afd7e20d-kube-api-access-j69cz\") on node \"ci-9999-9-9-k-f136f833c6\" DevicePath \"\"" Oct 27 08:24:09.536112 kubelet[2788]: I1027 08:24:09.536104 2788 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/19247ceb-194d-4562-847c-8010afd7e20d-whisker-backend-key-pair\") on node \"ci-9999-9-9-k-f136f833c6\" DevicePath \"\"" Oct 27 08:24:09.536112 kubelet[2788]: I1027 08:24:09.536116 2788 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19247ceb-194d-4562-847c-8010afd7e20d-whisker-ca-bundle\") on node \"ci-9999-9-9-k-f136f833c6\" DevicePath \"\"" Oct 27 08:24:09.552488 kubelet[2788]: I1027 08:24:09.552283 2788 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 27 08:24:09.610816 systemd[1]: var-lib-kubelet-pods-19247ceb\x2d194d\x2d4562\x2d847c\x2d8010afd7e20d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj69cz.mount: Deactivated successfully. Oct 27 08:24:09.611253 systemd[1]: var-lib-kubelet-pods-19247ceb\x2d194d\x2d4562\x2d847c\x2d8010afd7e20d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 27 08:24:10.093827 systemd[1]: Removed slice kubepods-besteffort-pod19247ceb_194d_4562_847c_8010afd7e20d.slice - libcontainer container kubepods-besteffort-pod19247ceb_194d_4562_847c_8010afd7e20d.slice. Oct 27 08:24:10.228903 systemd[1]: Created slice kubepods-besteffort-pod766fb522_b8e8_496d_9871_210f41ee5bf3.slice - libcontainer container kubepods-besteffort-pod766fb522_b8e8_496d_9871_210f41ee5bf3.slice. 
Oct 27 08:24:10.241440 kubelet[2788]: I1027 08:24:10.241382 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/766fb522-b8e8-496d-9871-210f41ee5bf3-whisker-backend-key-pair\") pod \"whisker-6b4b456d6b-4jfhq\" (UID: \"766fb522-b8e8-496d-9871-210f41ee5bf3\") " pod="calico-system/whisker-6b4b456d6b-4jfhq" Oct 27 08:24:10.241976 kubelet[2788]: I1027 08:24:10.241926 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44vj\" (UniqueName: \"kubernetes.io/projected/766fb522-b8e8-496d-9871-210f41ee5bf3-kube-api-access-j44vj\") pod \"whisker-6b4b456d6b-4jfhq\" (UID: \"766fb522-b8e8-496d-9871-210f41ee5bf3\") " pod="calico-system/whisker-6b4b456d6b-4jfhq" Oct 27 08:24:10.242054 kubelet[2788]: I1027 08:24:10.242038 2788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/766fb522-b8e8-496d-9871-210f41ee5bf3-whisker-ca-bundle\") pod \"whisker-6b4b456d6b-4jfhq\" (UID: \"766fb522-b8e8-496d-9871-210f41ee5bf3\") " pod="calico-system/whisker-6b4b456d6b-4jfhq" Oct 27 08:24:10.322189 containerd[1628]: time="2025-10-27T08:24:10.322112730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"e38314c9c6bbfa22273f10e4749ad99f76c6909fa96f4702b8e47a24f2fc69ba\" pid:3888 exit_status:1 exited_at:{seconds:1761553450 nanos:321682595}" Oct 27 08:24:10.542065 containerd[1628]: time="2025-10-27T08:24:10.541913258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b4b456d6b-4jfhq,Uid:766fb522-b8e8-496d-9871-210f41ee5bf3,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:10.800271 kubelet[2788]: I1027 08:24:10.800004 2788 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19247ceb-194d-4562-847c-8010afd7e20d" path="/var/lib/kubelet/pods/19247ceb-194d-4562-847c-8010afd7e20d/volumes" Oct 27 08:24:10.933596 systemd-networkd[1521]: cali2ac772eaa0e: Link UP Oct 27 08:24:10.933817 systemd-networkd[1521]: cali2ac772eaa0e: Gained carrier Oct 27 08:24:10.952134 containerd[1628]: 2025-10-27 08:24:10.596 [INFO][3932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 27 08:24:10.952134 containerd[1628]: 2025-10-27 08:24:10.657 [INFO][3932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0 whisker-6b4b456d6b- calico-system 766fb522-b8e8-496d-9871-210f41ee5bf3 891 0 2025-10-27 08:24:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b4b456d6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 whisker-6b4b456d6b-4jfhq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2ac772eaa0e [] [] }} ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-" Oct 27 08:24:10.952134 containerd[1628]: 2025-10-27 08:24:10.657 [INFO][3932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" 
Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:10.952134 containerd[1628]: 2025-10-27 08:24:10.836 [INFO][3997] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" HandleID="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Workload="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.840 [INFO][3997] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" HandleID="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Workload="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036c600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-9-k-f136f833c6", "pod":"whisker-6b4b456d6b-4jfhq", "timestamp":"2025-10-27 08:24:10.836969571 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.840 [INFO][3997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.840 [INFO][3997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.841 [INFO][3997] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.861 [INFO][3997] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.881 [INFO][3997] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.888 [INFO][3997] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.893 [INFO][3997] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952361 containerd[1628]: 2025-10-27 08:24:10.896 [INFO][3997] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.896 [INFO][3997] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.899 [INFO][3997] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0 Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.908 [INFO][3997] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" host="ci-9999-9-9-k-f136f833c6" 
Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.914 [INFO][3997] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.193/26] block=192.168.122.192/26 handle="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.914 [INFO][3997] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.193/26] handle="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.914 [INFO][3997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:24:10.952958 containerd[1628]: 2025-10-27 08:24:10.914 [INFO][3997] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.193/26] IPv6=[] ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" HandleID="k8s-pod-network.ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Workload="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:10.954725 containerd[1628]: 2025-10-27 08:24:10.917 [INFO][3932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0", GenerateName:"whisker-6b4b456d6b-", Namespace:"calico-system", SelfLink:"", UID:"766fb522-b8e8-496d-9871-210f41ee5bf3", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b4b456d6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"whisker-6b4b456d6b-4jfhq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ac772eaa0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:10.954725 containerd[1628]: 2025-10-27 08:24:10.917 [INFO][3932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.193/32] ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:10.954796 containerd[1628]: 2025-10-27 08:24:10.917 [INFO][3932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ac772eaa0e ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" 
WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:10.954796 containerd[1628]: 2025-10-27 08:24:10.927 [INFO][3932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:10.955227 containerd[1628]: 2025-10-27 08:24:10.928 [INFO][3932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0", GenerateName:"whisker-6b4b456d6b-", Namespace:"calico-system", SelfLink:"", UID:"766fb522-b8e8-496d-9871-210f41ee5bf3", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 24, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b4b456d6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0", Pod:"whisker-6b4b456d6b-4jfhq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ac772eaa0e", MAC:"9a:ab:94:46:b4:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:10.955657 containerd[1628]: 2025-10-27 08:24:10.946 [INFO][3932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" Namespace="calico-system" Pod="whisker-6b4b456d6b-4jfhq" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-whisker--6b4b456d6b--4jfhq-eth0" Oct 27 08:24:11.101909 containerd[1628]: time="2025-10-27T08:24:11.101409044Z" level=info msg="connecting to shim ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0" address="unix:///run/containerd/s/7ef088d15f439370869875d0a5a27d3f2a40f9052c01f8599ebf1fd798e1f945" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:11.148923 systemd[1]: Started cri-containerd-ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0.scope - libcontainer container ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0. 
Oct 27 08:24:11.239824 containerd[1628]: time="2025-10-27T08:24:11.239769229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b4b456d6b-4jfhq,Uid:766fb522-b8e8-496d-9871-210f41ee5bf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff0f0cc8246aa94acf550b493676004128596946d2159b4397b968f67fe5d7b0\"" Oct 27 08:24:11.245900 containerd[1628]: time="2025-10-27T08:24:11.245552498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:24:11.257230 containerd[1628]: time="2025-10-27T08:24:11.257179242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"c685a1dcf9f1575294711cef09a90a4695fc9084b89f92d2cd16062ed1c3cafc\" pid:4071 exit_status:1 exited_at:{seconds:1761553451 nanos:256769510}" Oct 27 08:24:11.355378 systemd-networkd[1521]: vxlan.calico: Link UP Oct 27 08:24:11.355390 systemd-networkd[1521]: vxlan.calico: Gained carrier Oct 27 08:24:11.789400 containerd[1628]: time="2025-10-27T08:24:11.789249787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tphb5,Uid:d04f9940-0e64-4166-93cf-749a47710fc1,Namespace:kube-system,Attempt:0,}" Oct 27 08:24:11.793494 containerd[1628]: time="2025-10-27T08:24:11.791314861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wd8vm,Uid:79766d3c-55af-44b2-853b-a76f9b90d865,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:11.973784 systemd-networkd[1521]: calif9b13b4ba89: Link UP Oct 27 08:24:11.975642 systemd-networkd[1521]: calif9b13b4ba89: Gained carrier Oct 27 08:24:12.007787 containerd[1628]: 2025-10-27 08:24:11.852 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0 goldmane-7c778bb748- calico-system 79766d3c-55af-44b2-853b-a76f9b90d865 816 0 2025-10-27 08:23:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 goldmane-7c778bb748-wd8vm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif9b13b4ba89 [] [] }} ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-" Oct 27 08:24:12.007787 containerd[1628]: 2025-10-27 08:24:11.852 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.007787 containerd[1628]: 2025-10-27 08:24:11.905 [INFO][4208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" HandleID="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Workload="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.906 [INFO][4208] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" 
HandleID="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Workload="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-9-k-f136f833c6", "pod":"goldmane-7c778bb748-wd8vm", "timestamp":"2025-10-27 08:24:11.905783847 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.906 [INFO][4208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.906 [INFO][4208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.906 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.917 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.923 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.929 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.931 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.007999 containerd[1628]: 2025-10-27 08:24:11.934 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.935 [INFO][4208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.937 [INFO][4208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.944 [INFO][4208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.954 [INFO][4208] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.194/26] block=192.168.122.192/26 handle="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.954 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.194/26] handle="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.955 [INFO][4208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:24:12.008800 containerd[1628]: 2025-10-27 08:24:11.955 [INFO][4208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.194/26] IPv6=[] ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" HandleID="k8s-pod-network.def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Workload="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.009579 containerd[1628]: 2025-10-27 08:24:11.966 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"79766d3c-55af-44b2-853b-a76f9b90d865", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"goldmane-7c778bb748-wd8vm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif9b13b4ba89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:12.009636 containerd[1628]: 2025-10-27 08:24:11.967 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.194/32] ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.009636 containerd[1628]: 2025-10-27 08:24:11.967 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9b13b4ba89 ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.009636 containerd[1628]: 2025-10-27 08:24:11.977 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.009691 containerd[1628]: 2025-10-27 08:24:11.977 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" 
Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"79766d3c-55af-44b2-853b-a76f9b90d865", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da", Pod:"goldmane-7c778bb748-wd8vm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif9b13b4ba89", MAC:"de:34:9d:0c:e8:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:12.009752 containerd[1628]: 2025-10-27 08:24:12.003 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" Namespace="calico-system" Pod="goldmane-7c778bb748-wd8vm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-goldmane--7c778bb748--wd8vm-eth0" Oct 27 08:24:12.039951 containerd[1628]: time="2025-10-27T08:24:12.039844333Z" level=info msg="connecting to shim def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da" address="unix:///run/containerd/s/fb84c1a822da6aab48ce24e68aa3ed739828a57d0f024e5831fec4cf36479d7a" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:12.071117 systemd-networkd[1521]: caliaed6ee5a9d9: Link UP Oct 27 08:24:12.071611 systemd-networkd[1521]: caliaed6ee5a9d9: Gained carrier Oct 27 08:24:12.086602 systemd[1]: Started cri-containerd-def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da.scope - libcontainer container def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da. 
Oct 27 08:24:12.096761 containerd[1628]: 2025-10-27 08:24:11.854 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0 coredns-66bc5c9577- kube-system d04f9940-0e64-4166-93cf-749a47710fc1 808 0 2025-10-27 08:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 coredns-66bc5c9577-tphb5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaed6ee5a9d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-" Oct 27 08:24:12.096761 containerd[1628]: 2025-10-27 08:24:11.855 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.096761 containerd[1628]: 2025-10-27 08:24:11.910 [INFO][4213] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" HandleID="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Workload="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:11.910 [INFO][4213] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" HandleID="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Workload="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad5b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-9999-9-9-k-f136f833c6", "pod":"coredns-66bc5c9577-tphb5", "timestamp":"2025-10-27 08:24:11.910529393 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:11.910 [INFO][4213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:11.955 [INFO][4213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:11.956 [INFO][4213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:12.017 [INFO][4213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:12.030 [INFO][4213] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:12.039 [INFO][4213] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:12.043 [INFO][4213] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097004 containerd[1628]: 2025-10-27 08:24:12.046 [INFO][4213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.046 [INFO][4213] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.048 [INFO][4213] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31 Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.055 [INFO][4213] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.062 [INFO][4213] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.195/26] block=192.168.122.192/26 handle="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.062 [INFO][4213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.195/26] handle="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.063 [INFO][4213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
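[Editor's note] The ipam/ipam.go and ipam/ipam_plugin.go entries above trace Calico's block-affinity allocation pattern: acquire the host-wide IPAM lock, look up the block affine to this node (192.168.122.192/26), claim the next free address, write the block back, then release the lock. The Go sketch below is a minimal, self-contained illustration of that pattern only; the types and function names are assumptions for illustration, not Calico's actual API.

```go
// Illustrative sketch only: mimics the lock -> affinity -> claim -> persist
// sequence recorded in the IPAM log entries above. Not Calico's real API.
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models a /26 affinity block owned by one host.
type block struct {
	cidr      *net.IPNet
	allocated map[string]bool // addresses already handed out
}

var (
	hostIPAMLock sync.Mutex // stand-in for the "host-wide IPAM lock"
	hostBlock    *block     // stand-in for the block affine to this node
)

// autoAssign claims the next free IPv4 address from the host's affine block,
// mirroring acquire-lock / load-block / claim / write-block / release-lock.
func autoAssign(handleID string) (net.IP, error) {
	hostIPAMLock.Lock()         // "About to acquire host-wide IPAM lock."
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	ip := hostBlock.cidr.IP.Mask(hostBlock.cidr.Mask)
	for ; hostBlock.cidr.Contains(ip); ip = nextIP(ip) {
		if !hostBlock.allocated[ip.String()] {
			hostBlock.allocated[ip.String()] = true // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("no free addresses in %s for handle %s", hostBlock.cidr, handleID)
}

// nextIP returns the address immediately after ip.
func nextIP(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.122.192/26")
	hostBlock = &block{cidr: cidr, allocated: map[string]bool{
		"192.168.122.192": true, // network address
		"192.168.122.193": true, // previously claimed address
	}}
	ip, err := autoAssign("k8s-pod-network.example-handle")
	fmt.Println(ip, err) // prints 192.168.122.194 <nil>, matching the log's next claim
}
```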
Oct 27 08:24:12.097986 containerd[1628]: 2025-10-27 08:24:12.063 [INFO][4213] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.195/26] IPv6=[] ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" HandleID="k8s-pod-network.73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Workload="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.098198 containerd[1628]: 2025-10-27 08:24:12.068 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d04f9940-0e64-4166-93cf-749a47710fc1", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"coredns-66bc5c9577-tphb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaed6ee5a9d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:12.098198 containerd[1628]: 2025-10-27 08:24:12.068 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.195/32] ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.098198 containerd[1628]: 2025-10-27 08:24:12.068 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaed6ee5a9d9 ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" 
WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.098198 containerd[1628]: 2025-10-27 08:24:12.071 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.098198 containerd[1628]: 2025-10-27 08:24:12.072 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d04f9940-0e64-4166-93cf-749a47710fc1", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31", Pod:"coredns-66bc5c9577-tphb5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaed6ee5a9d9", MAC:"7a:df:ff:2b:f4:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:12.098668 containerd[1628]: 2025-10-27 08:24:12.093 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" Namespace="kube-system" Pod="coredns-66bc5c9577-tphb5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--tphb5-eth0" Oct 27 08:24:12.115405 containerd[1628]: time="2025-10-27T08:24:12.115321746Z" level=info msg="connecting to shim 73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31" 
address="unix:///run/containerd/s/432f5ad2a5e369e3d00be002360890e908a707a8efde35ea58ef9a70abb7b66b" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:12.146565 systemd[1]: Started cri-containerd-73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31.scope - libcontainer container 73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31. Oct 27 08:24:12.200139 containerd[1628]: time="2025-10-27T08:24:12.200100839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:12.202869 containerd[1628]: time="2025-10-27T08:24:12.202255558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:24:12.203930 containerd[1628]: time="2025-10-27T08:24:12.203658218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:24:12.205085 kubelet[2788]: E1027 08:24:12.204249 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:24:12.205085 kubelet[2788]: E1027 08:24:12.204293 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:24:12.205085 kubelet[2788]: E1027 08:24:12.204372 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:12.208384 containerd[1628]: time="2025-10-27T08:24:12.208315267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:24:12.209294 containerd[1628]: time="2025-10-27T08:24:12.209274791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wd8vm,Uid:79766d3c-55af-44b2-853b-a76f9b90d865,Namespace:calico-system,Attempt:0,} returns sandbox id \"def5745614b89f5f88334135d4830ff2dc50f9557fa71b08d1ee61b22d61e4da\"" Oct 27 08:24:12.243918 containerd[1628]: time="2025-10-27T08:24:12.243836561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tphb5,Uid:d04f9940-0e64-4166-93cf-749a47710fc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31\"" Oct 27 08:24:12.254352 containerd[1628]: time="2025-10-27T08:24:12.254300053Z" level=info msg="CreateContainer within sandbox \"73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:24:12.265924 containerd[1628]: time="2025-10-27T08:24:12.265239273Z" 
level=info msg="Container 1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:12.271027 containerd[1628]: time="2025-10-27T08:24:12.270937855Z" level=info msg="CreateContainer within sandbox \"73ec9d04aaa37861b186e0a0595e860c1c4c47c68aa254c8660df2ae921dfd31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5\"" Oct 27 08:24:12.271928 containerd[1628]: time="2025-10-27T08:24:12.271897281Z" level=info msg="StartContainer for \"1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5\"" Oct 27 08:24:12.274226 containerd[1628]: time="2025-10-27T08:24:12.274180901Z" level=info msg="connecting to shim 1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5" address="unix:///run/containerd/s/432f5ad2a5e369e3d00be002360890e908a707a8efde35ea58ef9a70abb7b66b" protocol=ttrpc version=3 Oct 27 08:24:12.301649 systemd[1]: Started cri-containerd-1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5.scope - libcontainer container 1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5. Oct 27 08:24:12.334331 containerd[1628]: time="2025-10-27T08:24:12.334292776Z" level=info msg="StartContainer for \"1e8c6c909215fea11941563d9eed51f6f9f7ca5382d287bd5e265a54e06ec2e5\" returns successfully" Oct 27 08:24:12.684437 containerd[1628]: time="2025-10-27T08:24:12.684373372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:12.685488 containerd[1628]: time="2025-10-27T08:24:12.685426400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:24:12.685600 containerd[1628]: time="2025-10-27T08:24:12.685540052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:24:12.685827 kubelet[2788]: E1027 08:24:12.685775 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:24:12.685934 kubelet[2788]: E1027 08:24:12.685832 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:24:12.686104 kubelet[2788]: E1027 08:24:12.686047 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Oct 27 08:24:12.686569 containerd[1628]: time="2025-10-27T08:24:12.686443507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:24:12.692219 kubelet[2788]: E1027 08:24:12.692171 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:24:12.791001 containerd[1628]: time="2025-10-27T08:24:12.790549438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-jsc7m,Uid:96a60c22-8a13-49d1-8749-b73cb7e464a7,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:24:12.792393 containerd[1628]: time="2025-10-27T08:24:12.792337491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d68549b8-grhgf,Uid:e5f8aee0-010a-43df-b3cc-29e7716b4073,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:12.965623 systemd-networkd[1521]: calidcb37f17bfe: Link UP Oct 27 08:24:12.967365 systemd-networkd[1521]: calidcb37f17bfe: Gained carrier Oct 27 08:24:12.986595 systemd-networkd[1521]: cali2ac772eaa0e: Gained IPv6LL Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.860 [INFO][4367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0 calico-kube-controllers-74d68549b8- calico-system e5f8aee0-010a-43df-b3cc-29e7716b4073 814 0 2025-10-27 08:23:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74d68549b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 calico-kube-controllers-74d68549b8-grhgf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidcb37f17bfe [] [] }} ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.861 [INFO][4367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.912 [INFO][4391] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" 
HandleID="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.912 [INFO][4391] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" HandleID="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-9-k-f136f833c6", "pod":"calico-kube-controllers-74d68549b8-grhgf", "timestamp":"2025-10-27 08:24:12.912048909 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.912 [INFO][4391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.912 [INFO][4391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.912 [INFO][4391] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.923 [INFO][4391] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.929 [INFO][4391] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.937 [INFO][4391] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.940 [INFO][4391] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.942 [INFO][4391] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.942 [INFO][4391] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.945 [INFO][4391] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3 Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.950 [INFO][4391] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.956 [INFO][4391] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.196/26] block=192.168.122.192/26 handle="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" host="ci-9999-9-9-k-f136f833c6" 
Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.956 [INFO][4391] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.196/26] handle="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.956 [INFO][4391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:24:12.997313 containerd[1628]: 2025-10-27 08:24:12.956 [INFO][4391] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.196/26] IPv6=[] ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" HandleID="k8s-pod-network.f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:13.001547 containerd[1628]: 2025-10-27 08:24:12.959 [INFO][4367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0", GenerateName:"calico-kube-controllers-74d68549b8-", Namespace:"calico-system", SelfLink:"", UID:"e5f8aee0-010a-43df-b3cc-29e7716b4073", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d68549b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"calico-kube-controllers-74d68549b8-grhgf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidcb37f17bfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:13.001547 containerd[1628]: 2025-10-27 08:24:12.959 [INFO][4367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.196/32] ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:13.001547 containerd[1628]: 2025-10-27 08:24:12.959 [INFO][4367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcb37f17bfe ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" 
WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:13.001547 containerd[1628]: 2025-10-27 08:24:12.969 [INFO][4367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:13.001547 containerd[1628]: 2025-10-27 08:24:12.970 [INFO][4367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0", GenerateName:"calico-kube-controllers-74d68549b8-", Namespace:"calico-system", SelfLink:"", UID:"e5f8aee0-010a-43df-b3cc-29e7716b4073", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74d68549b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3", Pod:"calico-kube-controllers-74d68549b8-grhgf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidcb37f17bfe", MAC:"72:20:e1:45:07:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:13.001547 containerd[1628]: 2025-10-27 08:24:12.984 [INFO][4367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" Namespace="calico-system" Pod="calico-kube-controllers-74d68549b8-grhgf" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--kube--controllers--74d68549b8--grhgf-eth0" Oct 27 08:24:13.028313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686711367.mount: Deactivated successfully. 
Oct 27 08:24:13.048613 systemd-networkd[1521]: vxlan.calico: Gained IPv6LL Oct 27 08:24:13.055505 containerd[1628]: time="2025-10-27T08:24:13.053650219Z" level=info msg="connecting to shim f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3" address="unix:///run/containerd/s/bd2cec44612b98115301b5f51ba2e4d8340ac57f78f5c4668c88430d68a831b8" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:13.095745 systemd-networkd[1521]: cali0c3fbba1f3f: Link UP Oct 27 08:24:13.096600 systemd-networkd[1521]: cali0c3fbba1f3f: Gained carrier Oct 27 08:24:13.117882 systemd[1]: Started cri-containerd-f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3.scope - libcontainer container f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3. Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:12.869 [INFO][4366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0 calico-apiserver-68d8c5c9bc- calico-apiserver 96a60c22-8a13-49d1-8749-b73cb7e464a7 815 0 2025-10-27 08:23:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68d8c5c9bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 calico-apiserver-68d8c5c9bc-jsc7m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0c3fbba1f3f [] [] }} ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:12.869 [INFO][4366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:12.921 [INFO][4395] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" HandleID="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:12.922 [INFO][4395] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" HandleID="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-9-k-f136f833c6", "pod":"calico-apiserver-68d8c5c9bc-jsc7m", "timestamp":"2025-10-27 08:24:12.921414686 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 
08:24:12.922 [INFO][4395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:12.956 [INFO][4395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:12.957 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.024 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.038 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.054 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.057 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.060 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.060 [INFO][4395] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.065 [INFO][4395] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.075 [INFO][4395] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.086 [INFO][4395] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.197/26] block=192.168.122.192/26 handle="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.086 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.197/26] handle="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.086 [INFO][4395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
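[Editor's note] The kubelet entries around this point move from ErrImagePull to ImagePullBackOff ("Back-off pulling image ...", see the pod_workers entries that follow): after a failed pull, retries are delayed with an exponential back-off. The sketch below illustrates that retry shape only; the 10-second base and 5-minute cap are assumptions matching commonly cited defaults, not values read from this node's configuration.

```go
// Illustrative sketch of the exponential back-off behind the shift from
// ErrImagePull to ImagePullBackOff in the nearby kubelet entries.
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous back-off delay and clamps it at the limit.
func nextDelay(prev, base, limit time.Duration) time.Duration {
	if prev == 0 {
		return base
	}
	next := prev * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	const base = 10 * time.Second    // assumed default initial delay
	const limit = 5 * time.Minute    // assumed default cap
	var delay time.Duration
	for attempt := 1; attempt <= 6; attempt++ {
		delay = nextDelay(delay, base, limit)
		fmt.Printf("pull attempt %d failed: not found; next retry in %s\n", attempt, delay)
	}
}
```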
Oct 27 08:24:13.134047 containerd[1628]: 2025-10-27 08:24:13.086 [INFO][4395] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.197/26] IPv6=[] ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" HandleID="k8s-pod-network.51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.134998 containerd[1628]: 2025-10-27 08:24:13.089 [INFO][4366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0", GenerateName:"calico-apiserver-68d8c5c9bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a60c22-8a13-49d1-8749-b73cb7e464a7", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68d8c5c9bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"calico-apiserver-68d8c5c9bc-jsc7m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c3fbba1f3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:13.134998 containerd[1628]: 2025-10-27 08:24:13.089 [INFO][4366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.197/32] ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.134998 containerd[1628]: 2025-10-27 08:24:13.089 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c3fbba1f3f ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.134998 containerd[1628]: 2025-10-27 08:24:13.096 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.134998 containerd[1628]: 2025-10-27 
08:24:13.099 [INFO][4366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0", GenerateName:"calico-apiserver-68d8c5c9bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"96a60c22-8a13-49d1-8749-b73cb7e464a7", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68d8c5c9bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd", Pod:"calico-apiserver-68d8c5c9bc-jsc7m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c3fbba1f3f", MAC:"c6:08:0d:0d:09:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:13.134998 containerd[1628]: 2025-10-27 08:24:13.118 [INFO][4366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-jsc7m" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--jsc7m-eth0" Oct 27 08:24:13.162860 kubelet[2788]: E1027 08:24:13.162781 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:24:13.182117 containerd[1628]: time="2025-10-27T08:24:13.181984072Z" level=info msg="connecting to shim 
51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd" address="unix:///run/containerd/s/50110c724272fcabb1bca697d8f0b6102a918c0930f1292cc148377ed26fbf3a" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:13.222600 systemd[1]: Started cri-containerd-51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd.scope - libcontainer container 51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd. Oct 27 08:24:13.236243 kubelet[2788]: I1027 08:24:13.227939 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tphb5" podStartSLOduration=45.227922362 podStartE2EDuration="45.227922362s" podCreationTimestamp="2025-10-27 08:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:24:13.198171055 +0000 UTC m=+52.532908382" watchObservedRunningTime="2025-10-27 08:24:13.227922362 +0000 UTC m=+52.562659699" Oct 27 08:24:13.278664 containerd[1628]: time="2025-10-27T08:24:13.278598274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74d68549b8-grhgf,Uid:e5f8aee0-010a-43df-b3cc-29e7716b4073,Namespace:calico-system,Attempt:0,} returns sandbox id \"f379ba86dfba33f033010beba04512dc2c262665f7d181a10cd9bb0f1a9f5bb3\"" Oct 27 08:24:13.281635 containerd[1628]: time="2025-10-27T08:24:13.281526029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:13.282989 containerd[1628]: time="2025-10-27T08:24:13.282781105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:24:13.283234 containerd[1628]: time="2025-10-27T08:24:13.282973069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:13.283504 kubelet[2788]: E1027 08:24:13.283333 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:24:13.283655 kubelet[2788]: E1027 08:24:13.283365 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:24:13.284084 kubelet[2788]: E1027 08:24:13.283956 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wd8vm_calico-system(79766d3c-55af-44b2-853b-a76f9b90d865): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:13.284212 kubelet[2788]: E1027 08:24:13.284037 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:24:13.284279 containerd[1628]: time="2025-10-27T08:24:13.283869546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:24:13.325203 containerd[1628]: time="2025-10-27T08:24:13.325146231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-jsc7m,Uid:96a60c22-8a13-49d1-8749-b73cb7e464a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"51964c1b58207ff626483e0a532b4e31818cdb032d2e240f5d8d02011ae6d6cd\"" Oct 27 08:24:13.718294 containerd[1628]: time="2025-10-27T08:24:13.718223488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:13.719587 containerd[1628]: time="2025-10-27T08:24:13.719531456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:24:13.719587 containerd[1628]: time="2025-10-27T08:24:13.719560733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:24:13.719989 kubelet[2788]: E1027 08:24:13.719768 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:24:13.719989 kubelet[2788]: E1027 08:24:13.719840 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:24:13.720172 containerd[1628]: time="2025-10-27T08:24:13.720143158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:24:13.726378 kubelet[2788]: E1027 08:24:13.726320 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74d68549b8-grhgf_calico-system(e5f8aee0-010a-43df-b3cc-29e7716b4073): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:13.726547 kubelet[2788]: E1027 08:24:13.726392 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:13.753082 systemd-networkd[1521]: calif9b13b4ba89: Gained IPv6LL Oct 27 08:24:13.944967 systemd-networkd[1521]: caliaed6ee5a9d9: Gained IPv6LL Oct 27 08:24:14.153819 kubelet[2788]: E1027 08:24:14.153608 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:24:14.153819 kubelet[2788]: E1027 08:24:14.153757 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:14.155144 containerd[1628]: time="2025-10-27T08:24:14.155118120Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:14.156713 containerd[1628]: time="2025-10-27T08:24:14.156639407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:24:14.156713 containerd[1628]: time="2025-10-27T08:24:14.156681939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:14.156923 kubelet[2788]: E1027 08:24:14.156807 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:14.156923 kubelet[2788]: E1027 08:24:14.156831 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:14.156923 kubelet[2788]: E1027 08:24:14.156888 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-68d8c5c9bc-jsc7m_calico-apiserver(96a60c22-8a13-49d1-8749-b73cb7e464a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:14.156923 kubelet[2788]: E1027 08:24:14.156915 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:24:14.456841 systemd-networkd[1521]: calidcb37f17bfe: Gained IPv6LL Oct 27 08:24:14.788954 containerd[1628]: time="2025-10-27T08:24:14.788711052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6rbz,Uid:1b761e29-b614-4041-93ad-3a2beca6983c,Namespace:calico-system,Attempt:0,}" Oct 27 08:24:14.800707 containerd[1628]: time="2025-10-27T08:24:14.800672546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-f56tm,Uid:1bcea6e5-3c39-41c9-92bc-ee324a63b0a8,Namespace:calico-apiserver,Attempt:0,}" Oct 27 08:24:14.945293 systemd-networkd[1521]: caliab46250ecc1: Link UP Oct 27 08:24:14.945626 systemd-networkd[1521]: caliab46250ecc1: Gained carrier Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.873 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0 calico-apiserver-68d8c5c9bc- calico-apiserver 1bcea6e5-3c39-41c9-92bc-ee324a63b0a8 812 0 2025-10-27 08:23:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68d8c5c9bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 calico-apiserver-68d8c5c9bc-f56tm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab46250ecc1 [] [] }} ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.875 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.900 [INFO][4543] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" HandleID="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.901 [INFO][4543] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" HandleID="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-9999-9-9-k-f136f833c6", "pod":"calico-apiserver-68d8c5c9bc-f56tm", "timestamp":"2025-10-27 08:24:14.900722783 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.901 [INFO][4543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.901 [INFO][4543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.901 [INFO][4543] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.908 [INFO][4543] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.914 [INFO][4543] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.920 [INFO][4543] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.922 [INFO][4543] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.924 [INFO][4543] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.924 [INFO][4543] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.926 [INFO][4543] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.931 [INFO][4543] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.937 [INFO][4543] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.198/26] block=192.168.122.192/26 handle="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.937 [INFO][4543] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.198/26] handle="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" host="ci-9999-9-9-k-f136f833c6" Oct 27 
08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.937 [INFO][4543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 27 08:24:14.960399 containerd[1628]: 2025-10-27 08:24:14.938 [INFO][4543] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.198/26] IPv6=[] ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" HandleID="k8s-pod-network.37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Workload="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.961669 containerd[1628]: 2025-10-27 08:24:14.942 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0", GenerateName:"calico-apiserver-68d8c5c9bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1bcea6e5-3c39-41c9-92bc-ee324a63b0a8", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68d8c5c9bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"calico-apiserver-68d8c5c9bc-f56tm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab46250ecc1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:14.961669 containerd[1628]: 2025-10-27 08:24:14.942 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.198/32] ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.961669 containerd[1628]: 2025-10-27 08:24:14.942 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab46250ecc1 ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.961669 containerd[1628]: 2025-10-27 08:24:14.945 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" 
WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.961669 containerd[1628]: 2025-10-27 08:24:14.945 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0", GenerateName:"calico-apiserver-68d8c5c9bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"1bcea6e5-3c39-41c9-92bc-ee324a63b0a8", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68d8c5c9bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef", Pod:"calico-apiserver-68d8c5c9bc-f56tm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab46250ecc1", MAC:"f2:29:69:da:d1:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:14.961669 containerd[1628]: 2025-10-27 08:24:14.956 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" Namespace="calico-apiserver" Pod="calico-apiserver-68d8c5c9bc-f56tm" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-calico--apiserver--68d8c5c9bc--f56tm-eth0" Oct 27 08:24:14.968640 systemd-networkd[1521]: cali0c3fbba1f3f: Gained IPv6LL Oct 27 08:24:14.986082 containerd[1628]: time="2025-10-27T08:24:14.986055418Z" level=info msg="connecting to shim 37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef" address="unix:///run/containerd/s/5c0a8056033f2e421f45bf0bebd8a42ecf2c68e5b2f7195da4d0e6bf2029874b" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:15.007257 systemd[1]: Started cri-containerd-37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef.scope - libcontainer container 37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef. 
Oct 27 08:24:15.057826 systemd-networkd[1521]: calia356648bd47: Link UP Oct 27 08:24:15.062249 systemd-networkd[1521]: calia356648bd47: Gained carrier Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.874 [INFO][4517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0 csi-node-driver- calico-system 1b761e29-b614-4041-93ad-3a2beca6983c 707 0 2025-10-27 08:23:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 csi-node-driver-s6rbz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia356648bd47 [] [] }} ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.874 [INFO][4517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.910 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" HandleID="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Workload="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.911 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" HandleID="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Workload="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-9999-9-9-k-f136f833c6", "pod":"csi-node-driver-s6rbz", "timestamp":"2025-10-27 08:24:14.910957841 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.911 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.937 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:14.938 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.010 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.019 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.027 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.029 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.032 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.033 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.034 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.040 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.049 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.199/26] block=192.168.122.192/26 handle="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.049 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.199/26] handle="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.049 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:24:15.080528 containerd[1628]: 2025-10-27 08:24:15.049 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.199/26] IPv6=[] ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" HandleID="k8s-pod-network.ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Workload="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.081339 containerd[1628]: 2025-10-27 08:24:15.054 [INFO][4517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b761e29-b614-4041-93ad-3a2beca6983c", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"csi-node-driver-s6rbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia356648bd47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:15.081339 containerd[1628]: 2025-10-27 08:24:15.055 [INFO][4517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.199/32] ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.081339 containerd[1628]: 2025-10-27 08:24:15.055 [INFO][4517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia356648bd47 ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.081339 containerd[1628]: 2025-10-27 08:24:15.059 [INFO][4517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.081339 containerd[1628]: 2025-10-27 08:24:15.061 [INFO][4517] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1b761e29-b614-4041-93ad-3a2beca6983c", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc", Pod:"csi-node-driver-s6rbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia356648bd47", MAC:"72:3a:a1:58:e2:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:15.081339 containerd[1628]: 2025-10-27 08:24:15.074 [INFO][4517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" Namespace="calico-system" Pod="csi-node-driver-s6rbz" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-csi--node--driver--s6rbz-eth0" Oct 27 08:24:15.111789 containerd[1628]: time="2025-10-27T08:24:15.111670776Z" level=info msg="connecting to shim ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc" address="unix:///run/containerd/s/18beca7f18bc23795fecd774324ceec7ef08475de3bd5bb3d62ae1aba8030eab" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:15.117637 containerd[1628]: time="2025-10-27T08:24:15.117554659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68d8c5c9bc-f56tm,Uid:1bcea6e5-3c39-41c9-92bc-ee324a63b0a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"37e9cf19a7dfbc85f11a94dfdaf52c7b7fdeb4d37962391a9e4bbd02602582ef\"" Oct 27 08:24:15.120410 containerd[1628]: time="2025-10-27T08:24:15.120126759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:24:15.145672 systemd[1]: Started cri-containerd-ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc.scope - libcontainer container ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc. 
Oct 27 08:24:15.161123 kubelet[2788]: E1027 08:24:15.161080 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:15.163722 kubelet[2788]: E1027 08:24:15.163690 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:24:15.178077 containerd[1628]: time="2025-10-27T08:24:15.177643358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6rbz,Uid:1b761e29-b614-4041-93ad-3a2beca6983c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ccebc0907e26cb895f2063eb8a71817f492ffd721156a4b3c618d997b64462dc\"" Oct 27 08:24:15.568640 containerd[1628]: time="2025-10-27T08:24:15.568596291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:15.570066 containerd[1628]: time="2025-10-27T08:24:15.569818953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:24:15.570261 containerd[1628]: time="2025-10-27T08:24:15.569841888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:15.570542 kubelet[2788]: E1027 08:24:15.570477 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:15.570542 kubelet[2788]: E1027 08:24:15.570523 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:15.570904 kubelet[2788]: E1027 08:24:15.570813 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-f56tm_calico-apiserver(1bcea6e5-3c39-41c9-92bc-ee324a63b0a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:15.570904 kubelet[2788]: E1027 08:24:15.570850 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:24:15.571505 containerd[1628]: time="2025-10-27T08:24:15.571484053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:24:15.788755 containerd[1628]: time="2025-10-27T08:24:15.788684303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwwn5,Uid:53ab1dbd-3950-4a90-ad09-9df752a49a33,Namespace:kube-system,Attempt:0,}" Oct 27 08:24:15.914415 systemd-networkd[1521]: cali21523597fa1: Link UP Oct 27 08:24:15.914655 systemd-networkd[1521]: cali21523597fa1: Gained carrier Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.843 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0 coredns-66bc5c9577- kube-system 53ab1dbd-3950-4a90-ad09-9df752a49a33 813 0 2025-10-27 08:23:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-9999-9-9-k-f136f833c6 coredns-66bc5c9577-lwwn5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21523597fa1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.843 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.873 [INFO][4677] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" HandleID="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Workload="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.873 [INFO][4677] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" HandleID="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Workload="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-9999-9-9-k-f136f833c6", "pod":"coredns-66bc5c9577-lwwn5", "timestamp":"2025-10-27 08:24:15.873581255 +0000 UTC"}, Hostname:"ci-9999-9-9-k-f136f833c6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.874 [INFO][4677] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.874 [INFO][4677] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.874 [INFO][4677] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-9999-9-9-k-f136f833c6' Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.881 [INFO][4677] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.886 [INFO][4677] ipam/ipam.go 394: Looking up existing affinities for host host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.891 [INFO][4677] ipam/ipam.go 511: Trying affinity for 192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.892 [INFO][4677] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.894 [INFO][4677] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.894 [INFO][4677] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.896 [INFO][4677] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.903 [INFO][4677] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.909 [INFO][4677] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.200/26] block=192.168.122.192/26 handle="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.909 [INFO][4677] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.200/26] handle="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" host="ci-9999-9-9-k-f136f833c6" Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.909 [INFO][4677] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 27 08:24:15.926560 containerd[1628]: 2025-10-27 08:24:15.909 [INFO][4677] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.200/26] IPv6=[] ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" HandleID="k8s-pod-network.e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Workload="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.927592 containerd[1628]: 2025-10-27 08:24:15.911 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53ab1dbd-3950-4a90-ad09-9df752a49a33", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"", Pod:"coredns-66bc5c9577-lwwn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21523597fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:15.927592 containerd[1628]: 2025-10-27 08:24:15.911 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.200/32] ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.927592 containerd[1628]: 2025-10-27 08:24:15.911 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21523597fa1 ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" 
WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.927592 containerd[1628]: 2025-10-27 08:24:15.913 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.927592 containerd[1628]: 2025-10-27 08:24:15.914 [INFO][4666] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"53ab1dbd-3950-4a90-ad09-9df752a49a33", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.October, 27, 8, 23, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-9999-9-9-k-f136f833c6", ContainerID:"e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a", Pod:"coredns-66bc5c9577-lwwn5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21523597fa1", MAC:"3e:78:be:b5:b5:6c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 27 08:24:15.927771 containerd[1628]: 2025-10-27 08:24:15.922 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" Namespace="kube-system" Pod="coredns-66bc5c9577-lwwn5" WorkloadEndpoint="ci--9999--9--9--k--f136f833c6-k8s-coredns--66bc5c9577--lwwn5-eth0" Oct 27 08:24:15.950696 containerd[1628]: time="2025-10-27T08:24:15.950515030Z" level=info msg="connecting to shim e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a" 
address="unix:///run/containerd/s/6f731d9aaea7f7176440e10701652d3aa3f8cb838675adfa81747da5c007893c" namespace=k8s.io protocol=ttrpc version=3 Oct 27 08:24:15.985923 systemd[1]: Started cri-containerd-e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a.scope - libcontainer container e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a. Oct 27 08:24:16.040009 containerd[1628]: time="2025-10-27T08:24:16.039975574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lwwn5,Uid:53ab1dbd-3950-4a90-ad09-9df752a49a33,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a\"" Oct 27 08:24:16.044777 containerd[1628]: time="2025-10-27T08:24:16.044731099Z" level=info msg="CreateContainer within sandbox \"e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 08:24:16.058418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount15531091.mount: Deactivated successfully. Oct 27 08:24:16.064264 containerd[1628]: time="2025-10-27T08:24:16.064229141Z" level=info msg="Container e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:24:16.073479 containerd[1628]: time="2025-10-27T08:24:16.073435584Z" level=info msg="CreateContainer within sandbox \"e6bf87b132b9f3b42ff73ea50c65934be37e3d1892f19d5358a8917019409b8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1\"" Oct 27 08:24:16.074344 containerd[1628]: time="2025-10-27T08:24:16.074316218Z" level=info msg="StartContainer for \"e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1\"" Oct 27 08:24:16.075240 containerd[1628]: time="2025-10-27T08:24:16.075211050Z" level=info msg="connecting to shim e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1" address="unix:///run/containerd/s/6f731d9aaea7f7176440e10701652d3aa3f8cb838675adfa81747da5c007893c" protocol=ttrpc version=3 Oct 27 08:24:16.093759 systemd[1]: Started cri-containerd-e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1.scope - libcontainer container e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1. 
Oct 27 08:24:16.129478 containerd[1628]: time="2025-10-27T08:24:16.129407270Z" level=info msg="StartContainer for \"e5d74febff3fbc7840ae72932d0e8f1419de11803370f24db7f9bb8d33b998f1\" returns successfully" Oct 27 08:24:16.140030 containerd[1628]: time="2025-10-27T08:24:16.139981479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:16.140924 containerd[1628]: time="2025-10-27T08:24:16.140873816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:24:16.141022 containerd[1628]: time="2025-10-27T08:24:16.140975313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:24:16.141768 kubelet[2788]: E1027 08:24:16.141729 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:24:16.141820 kubelet[2788]: E1027 08:24:16.141778 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:24:16.141882 kubelet[2788]: E1027 08:24:16.141851 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:16.142850 containerd[1628]: time="2025-10-27T08:24:16.142812057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:24:16.168573 kubelet[2788]: E1027 08:24:16.167833 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:24:16.181075 kubelet[2788]: I1027 08:24:16.181013 2788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lwwn5" podStartSLOduration=48.180996874 podStartE2EDuration="48.180996874s" podCreationTimestamp="2025-10-27 08:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 08:24:16.180520823 +0000 UTC m=+55.515258179" watchObservedRunningTime="2025-10-27 08:24:16.180996874 +0000 UTC m=+55.515734211" Oct 27 
08:24:16.186032 systemd-networkd[1521]: caliab46250ecc1: Gained IPv6LL Oct 27 08:24:16.574600 containerd[1628]: time="2025-10-27T08:24:16.574439024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:16.575734 containerd[1628]: time="2025-10-27T08:24:16.575681129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:24:16.575927 containerd[1628]: time="2025-10-27T08:24:16.575715466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:24:16.576365 kubelet[2788]: E1027 08:24:16.576269 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:24:16.577140 kubelet[2788]: E1027 08:24:16.577104 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:24:16.577271 kubelet[2788]: E1027 08:24:16.577201 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:16.577271 kubelet[2788]: E1027 08:24:16.577243 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:24:16.824710 systemd-networkd[1521]: calia356648bd47: Gained IPv6LL Oct 27 08:24:17.080669 systemd-networkd[1521]: cali21523597fa1: Gained IPv6LL Oct 27 08:24:17.169614 kubelet[2788]: E1027 08:24:17.169517 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:24:18.057146 systemd[1]: Started sshd@11-46.62.164.160:22-14.103.173.90:52560.service - OpenSSH per-connection server daemon (14.103.173.90:52560). Oct 27 08:24:23.615838 systemd[1]: Started sshd@12-46.62.164.160:22-103.172.236.249:49384.service - OpenSSH per-connection server daemon (103.172.236.249:49384). Oct 27 08:24:25.007655 sshd[4797]: Received disconnect from 103.172.236.249 port 49384:11: Bye Bye [preauth] Oct 27 08:24:25.007655 sshd[4797]: Disconnected from authenticating user root 103.172.236.249 port 49384 [preauth] Oct 27 08:24:25.009366 systemd[1]: sshd@12-46.62.164.160:22-103.172.236.249:49384.service: Deactivated successfully. Oct 27 08:24:26.789709 containerd[1628]: time="2025-10-27T08:24:26.788965489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:24:27.270471 containerd[1628]: time="2025-10-27T08:24:27.270417110Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:27.271697 containerd[1628]: time="2025-10-27T08:24:27.271648105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:24:27.271886 containerd[1628]: time="2025-10-27T08:24:27.271745431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:24:27.272641 kubelet[2788]: E1027 08:24:27.271971 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:24:27.272641 kubelet[2788]: E1027 08:24:27.272027 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:24:27.272641 kubelet[2788]: E1027 08:24:27.272330 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-74d68549b8-grhgf_calico-system(e5f8aee0-010a-43df-b3cc-29e7716b4073): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:27.272641 kubelet[2788]: E1027 08:24:27.272409 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:27.273849 containerd[1628]: time="2025-10-27T08:24:27.273061908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:24:27.709998 containerd[1628]: time="2025-10-27T08:24:27.709941146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:27.711185 containerd[1628]: time="2025-10-27T08:24:27.711125643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:24:27.711320 containerd[1628]: time="2025-10-27T08:24:27.711210925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:27.711955 kubelet[2788]: E1027 08:24:27.711910 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:27.712409 kubelet[2788]: E1027 08:24:27.711961 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:27.712409 kubelet[2788]: E1027 08:24:27.712068 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-f56tm_calico-apiserver(1bcea6e5-3c39-41c9-92bc-ee324a63b0a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:27.712409 kubelet[2788]: E1027 08:24:27.712112 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:24:27.788636 containerd[1628]: time="2025-10-27T08:24:27.788580558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:24:28.239626 containerd[1628]: time="2025-10-27T08:24:28.239541658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:28.240897 containerd[1628]: time="2025-10-27T08:24:28.240856621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:24:28.241047 containerd[1628]: time="2025-10-27T08:24:28.240939198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:28.241151 kubelet[2788]: E1027 08:24:28.241104 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:24:28.241234 kubelet[2788]: E1027 08:24:28.241156 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:24:28.241397 kubelet[2788]: E1027 08:24:28.241364 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wd8vm_calico-system(79766d3c-55af-44b2-853b-a76f9b90d865): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:28.241477 kubelet[2788]: E1027 08:24:28.241409 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:24:28.242162 containerd[1628]: time="2025-10-27T08:24:28.242119545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:24:28.692670 containerd[1628]: time="2025-10-27T08:24:28.692619078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:28.693783 containerd[1628]: time="2025-10-27T08:24:28.693742005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:24:28.693920 containerd[1628]: time="2025-10-27T08:24:28.693762264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:24:28.694097 kubelet[2788]: E1027 08:24:28.694043 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:24:28.694378 kubelet[2788]: E1027 08:24:28.694124 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:24:28.694378 kubelet[2788]: E1027 08:24:28.694225 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:28.695611 containerd[1628]: time="2025-10-27T08:24:28.695589020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:24:29.140078 containerd[1628]: time="2025-10-27T08:24:29.140009967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:29.141323 containerd[1628]: time="2025-10-27T08:24:29.141204287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:24:29.141323 containerd[1628]: time="2025-10-27T08:24:29.141280652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:24:29.142097 kubelet[2788]: E1027 08:24:29.141581 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:24:29.142097 kubelet[2788]: E1027 08:24:29.141626 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:24:29.142097 kubelet[2788]: E1027 08:24:29.141833 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:29.142218 containerd[1628]: time="2025-10-27T08:24:29.141926410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:24:29.142244 kubelet[2788]: E1027 08:24:29.141937 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:24:29.934399 containerd[1628]: time="2025-10-27T08:24:29.934269076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:29.935718 containerd[1628]: time="2025-10-27T08:24:29.935661123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:24:29.935841 containerd[1628]: time="2025-10-27T08:24:29.935793475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:29.936083 kubelet[2788]: E1027 08:24:29.936012 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:29.936661 kubelet[2788]: E1027 08:24:29.936080 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:29.936661 kubelet[2788]: E1027 08:24:29.936289 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-jsc7m_calico-apiserver(96a60c22-8a13-49d1-8749-b73cb7e464a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:29.936661 kubelet[2788]: E1027 08:24:29.936356 2788 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:24:29.938292 containerd[1628]: time="2025-10-27T08:24:29.938223114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:24:30.372988 containerd[1628]: time="2025-10-27T08:24:30.372822290Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:30.374624 containerd[1628]: time="2025-10-27T08:24:30.374510317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:24:30.374726 containerd[1628]: time="2025-10-27T08:24:30.374664600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:24:30.375062 kubelet[2788]: E1027 08:24:30.374989 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:24:30.375203 kubelet[2788]: E1027 08:24:30.375077 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:24:30.375203 kubelet[2788]: E1027 08:24:30.375178 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:30.376977 containerd[1628]: time="2025-10-27T08:24:30.376865731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:24:30.843060 containerd[1628]: time="2025-10-27T08:24:30.843024519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:30.844298 containerd[1628]: time="2025-10-27T08:24:30.844258763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:24:30.844375 containerd[1628]: time="2025-10-27T08:24:30.844358544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, 
bytes read=93" Oct 27 08:24:30.844562 kubelet[2788]: E1027 08:24:30.844513 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:24:30.844562 kubelet[2788]: E1027 08:24:30.844558 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:24:30.844668 kubelet[2788]: E1027 08:24:30.844631 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:30.844758 kubelet[2788]: E1027 08:24:30.844688 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:24:32.397081 systemd[1]: Started sshd@13-46.62.164.160:22-64.227.134.24:41838.service - OpenSSH per-connection server daemon (64.227.134.24:41838). Oct 27 08:24:33.425286 sshd[4813]: Invalid user xs from 64.227.134.24 port 41838 Oct 27 08:24:33.618233 sshd[4813]: Received disconnect from 64.227.134.24 port 41838:11: Bye Bye [preauth] Oct 27 08:24:33.618233 sshd[4813]: Disconnected from invalid user xs 64.227.134.24 port 41838 [preauth] Oct 27 08:24:33.620328 systemd[1]: sshd@13-46.62.164.160:22-64.227.134.24:41838.service: Deactivated successfully. 
Oct 27 08:24:39.789713 kubelet[2788]: E1027 08:24:39.788553 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:24:39.789713 kubelet[2788]: E1027 08:24:39.789213 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:24:40.789691 kubelet[2788]: E1027 08:24:40.788186 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:41.189430 containerd[1628]: time="2025-10-27T08:24:41.189355981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"de473b30ea74dae814482643786ad7c17e5252c71135220701c7c44f3da30bc3\" pid:4832 exited_at:{seconds:1761553481 nanos:188260138}" Oct 27 08:24:42.790514 kubelet[2788]: E1027 08:24:42.790445 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:24:42.791116 kubelet[2788]: E1027 08:24:42.790557 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:24:42.791422 kubelet[2788]: E1027 08:24:42.791329 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:24:50.818349 containerd[1628]: time="2025-10-27T08:24:50.817688300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:24:51.261850 containerd[1628]: time="2025-10-27T08:24:51.261699302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:51.263584 containerd[1628]: time="2025-10-27T08:24:51.263416029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:24:51.263584 containerd[1628]: time="2025-10-27T08:24:51.263528334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:51.263986 kubelet[2788]: E1027 08:24:51.263675 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:51.263986 kubelet[2788]: E1027 08:24:51.263740 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:51.263986 kubelet[2788]: E1027 08:24:51.263975 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-f56tm_calico-apiserver(1bcea6e5-3c39-41c9-92bc-ee324a63b0a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:51.263986 kubelet[2788]: E1027 08:24:51.264008 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:24:51.265439 containerd[1628]: time="2025-10-27T08:24:51.264485416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:24:51.903189 containerd[1628]: time="2025-10-27T08:24:51.903107885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:51.904150 containerd[1628]: time="2025-10-27T08:24:51.904080187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:24:51.904150 containerd[1628]: time="2025-10-27T08:24:51.904111826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:51.904359 kubelet[2788]: E1027 08:24:51.904313 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:24:51.904425 kubelet[2788]: E1027 08:24:51.904360 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:24:51.904508 kubelet[2788]: E1027 08:24:51.904423 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wd8vm_calico-system(79766d3c-55af-44b2-853b-a76f9b90d865): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:51.904508 kubelet[2788]: E1027 08:24:51.904464 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:24:53.788113 containerd[1628]: time="2025-10-27T08:24:53.788061478Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:24:54.413723 containerd[1628]: time="2025-10-27T08:24:54.413648334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:54.421380 containerd[1628]: time="2025-10-27T08:24:54.414914027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:24:54.422434 containerd[1628]: time="2025-10-27T08:24:54.416525782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:24:54.422502 kubelet[2788]: E1027 08:24:54.421762 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:24:54.422502 kubelet[2788]: E1027 08:24:54.421805 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:24:54.422502 kubelet[2788]: E1027 08:24:54.421888 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:54.423742 containerd[1628]: time="2025-10-27T08:24:54.423692116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:24:54.878097 containerd[1628]: time="2025-10-27T08:24:54.877923592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:54.880794 containerd[1628]: time="2025-10-27T08:24:54.880734820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:24:54.880935 containerd[1628]: time="2025-10-27T08:24:54.880828592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:24:54.881036 kubelet[2788]: E1027 08:24:54.880954 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:24:54.881036 kubelet[2788]: E1027 08:24:54.880994 2788 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:24:54.881350 kubelet[2788]: E1027 08:24:54.881224 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:54.881350 kubelet[2788]: E1027 08:24:54.881266 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:24:54.882622 containerd[1628]: time="2025-10-27T08:24:54.882564680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:24:55.359652 containerd[1628]: time="2025-10-27T08:24:55.359594314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:55.360573 containerd[1628]: time="2025-10-27T08:24:55.360439223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:24:55.360717 containerd[1628]: time="2025-10-27T08:24:55.360687554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:24:55.361569 kubelet[2788]: E1027 08:24:55.360902 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:55.361569 kubelet[2788]: E1027 08:24:55.360942 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:24:55.361569 kubelet[2788]: 
E1027 08:24:55.361024 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-jsc7m_calico-apiserver(96a60c22-8a13-49d1-8749-b73cb7e464a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:55.361569 kubelet[2788]: E1027 08:24:55.361055 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:24:55.788585 containerd[1628]: time="2025-10-27T08:24:55.788403581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:24:56.223940 containerd[1628]: time="2025-10-27T08:24:56.223880983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:56.226367 containerd[1628]: time="2025-10-27T08:24:56.226173747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:24:56.226367 containerd[1628]: time="2025-10-27T08:24:56.226254709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:24:56.226627 kubelet[2788]: E1027 08:24:56.226589 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:24:56.226885 kubelet[2788]: E1027 08:24:56.226633 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:24:56.226885 kubelet[2788]: E1027 08:24:56.226798 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:56.227294 containerd[1628]: time="2025-10-27T08:24:56.227264697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:24:56.768547 containerd[1628]: time="2025-10-27T08:24:56.768481631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 
08:24:56.769587 containerd[1628]: time="2025-10-27T08:24:56.769512614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:24:56.769587 containerd[1628]: time="2025-10-27T08:24:56.769548381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:24:56.769835 kubelet[2788]: E1027 08:24:56.769759 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:24:56.769835 kubelet[2788]: E1027 08:24:56.769810 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:24:56.770124 kubelet[2788]: E1027 08:24:56.770046 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74d68549b8-grhgf_calico-system(e5f8aee0-010a-43df-b3cc-29e7716b4073): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:56.770124 kubelet[2788]: E1027 08:24:56.770085 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:24:56.770652 containerd[1628]: time="2025-10-27T08:24:56.770617461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:24:57.226664 containerd[1628]: time="2025-10-27T08:24:57.226474535Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:24:57.227709 containerd[1628]: time="2025-10-27T08:24:57.227602454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:24:57.227709 containerd[1628]: time="2025-10-27T08:24:57.227677833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, 
bytes read=85" Oct 27 08:24:57.227991 kubelet[2788]: E1027 08:24:57.227932 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:24:57.228169 kubelet[2788]: E1027 08:24:57.228000 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:24:57.228169 kubelet[2788]: E1027 08:24:57.228066 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:24:57.228169 kubelet[2788]: E1027 08:24:57.228100 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:25:02.788971 kubelet[2788]: E1027 08:25:02.788893 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:25:04.126609 systemd[1]: Started sshd@14-46.62.164.160:22-41.214.61.216:51816.service - OpenSSH per-connection server daemon (41.214.61.216:51816). 
Oct 27 08:25:04.790392 kubelet[2788]: E1027 08:25:04.789976 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:25:05.246995 sshd[4856]: Invalid user bash from 41.214.61.216 port 51816 Oct 27 08:25:05.452127 sshd[4856]: Received disconnect from 41.214.61.216 port 51816:11: Bye Bye [preauth] Oct 27 08:25:05.452127 sshd[4856]: Disconnected from invalid user bash 41.214.61.216 port 51816 [preauth] Oct 27 08:25:05.454344 systemd[1]: sshd@14-46.62.164.160:22-41.214.61.216:51816.service: Deactivated successfully. Oct 27 08:25:07.485344 systemd[1]: Started sshd@15-46.62.164.160:22-147.75.109.163:42836.service - OpenSSH per-connection server daemon (147.75.109.163:42836). Oct 27 08:25:07.789758 kubelet[2788]: E1027 08:25:07.789138 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:25:08.515770 sshd[4872]: Accepted publickey for core from 147.75.109.163 port 42836 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:08.522617 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:08.530414 systemd-logind[1593]: New session 8 of user core. Oct 27 08:25:08.536591 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 08:25:08.788547 kubelet[2788]: E1027 08:25:08.788311 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:25:09.741515 sshd[4875]: Connection closed by 147.75.109.163 port 42836 Oct 27 08:25:09.743502 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:09.752592 systemd-logind[1593]: Session 8 logged out. Waiting for processes to exit. Oct 27 08:25:09.756113 systemd[1]: sshd@15-46.62.164.160:22-147.75.109.163:42836.service: Deactivated successfully. Oct 27 08:25:09.761067 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 08:25:09.764416 systemd-logind[1593]: Removed session 8. 
Oct 27 08:25:09.793619 kubelet[2788]: E1027 08:25:09.793534 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:25:11.310480 containerd[1628]: time="2025-10-27T08:25:11.310410024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"afa4989715aef48b6285ecf628f6d59fce8724f74bba93afaeffcb6b3ec0035c\" pid:4899 exited_at:{seconds:1761553511 nanos:310019145}" Oct 27 08:25:11.787518 kubelet[2788]: E1027 08:25:11.787382 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:25:14.943786 systemd[1]: Started sshd@16-46.62.164.160:22-147.75.109.163:59370.service - OpenSSH per-connection server daemon (147.75.109.163:59370). 
Oct 27 08:25:15.789250 kubelet[2788]: E1027 08:25:15.789200 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:25:16.095045 sshd[4913]: Accepted publickey for core from 147.75.109.163 port 59370 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:16.099224 sshd-session[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:16.104477 systemd-logind[1593]: New session 9 of user core. Oct 27 08:25:16.119601 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 08:25:16.946728 sshd[4917]: Connection closed by 147.75.109.163 port 59370 Oct 27 08:25:16.946799 sshd-session[4913]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:16.952600 systemd[1]: sshd@16-46.62.164.160:22-147.75.109.163:59370.service: Deactivated successfully. Oct 27 08:25:16.955550 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 08:25:16.957582 systemd-logind[1593]: Session 9 logged out. Waiting for processes to exit. Oct 27 08:25:16.958874 systemd-logind[1593]: Removed session 9. Oct 27 08:25:18.787896 kubelet[2788]: E1027 08:25:18.787460 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:25:19.788100 kubelet[2788]: E1027 08:25:19.787721 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:25:19.788662 kubelet[2788]: E1027 08:25:19.788588 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 
08:25:21.306698 systemd[1]: Started sshd@17-46.62.164.160:22-177.234.145.2:60216.service - OpenSSH per-connection server daemon (177.234.145.2:60216). Oct 27 08:25:22.136346 systemd[1]: Started sshd@18-46.62.164.160:22-147.75.109.163:47156.service - OpenSSH per-connection server daemon (147.75.109.163:47156). Oct 27 08:25:22.503922 sshd[4933]: Invalid user student from 177.234.145.2 port 60216 Oct 27 08:25:22.725411 sshd[4933]: Received disconnect from 177.234.145.2 port 60216:11: Bye Bye [preauth] Oct 27 08:25:22.725411 sshd[4933]: Disconnected from invalid user student 177.234.145.2 port 60216 [preauth] Oct 27 08:25:22.728364 systemd[1]: sshd@17-46.62.164.160:22-177.234.145.2:60216.service: Deactivated successfully. Oct 27 08:25:22.793720 kubelet[2788]: E1027 08:25:22.793436 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:25:23.249305 sshd[4937]: Accepted publickey for core from 147.75.109.163 port 47156 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:23.251330 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:23.258212 systemd-logind[1593]: New session 10 of user core. Oct 27 08:25:23.264621 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 08:25:24.111749 sshd[4942]: Connection closed by 147.75.109.163 port 47156 Oct 27 08:25:24.113229 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:24.119417 systemd-logind[1593]: Session 10 logged out. Waiting for processes to exit. Oct 27 08:25:24.120372 systemd[1]: sshd@18-46.62.164.160:22-147.75.109.163:47156.service: Deactivated successfully. Oct 27 08:25:24.123895 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 08:25:24.125940 systemd-logind[1593]: Removed session 10. Oct 27 08:25:24.274890 systemd[1]: Started sshd@19-46.62.164.160:22-147.75.109.163:47164.service - OpenSSH per-connection server daemon (147.75.109.163:47164). Oct 27 08:25:25.316251 sshd[4955]: Accepted publickey for core from 147.75.109.163 port 47164 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:25.318408 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:25.326000 systemd-logind[1593]: New session 11 of user core. Oct 27 08:25:25.331749 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 27 08:25:25.791559 kubelet[2788]: E1027 08:25:25.791233 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:25:26.200089 sshd[4962]: Connection closed by 147.75.109.163 port 47164 Oct 27 08:25:26.200636 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:26.206071 systemd[1]: sshd@19-46.62.164.160:22-147.75.109.163:47164.service: Deactivated successfully. Oct 27 08:25:26.208746 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 08:25:26.210696 systemd-logind[1593]: Session 11 logged out. Waiting for processes to exit. Oct 27 08:25:26.212272 systemd-logind[1593]: Removed session 11. Oct 27 08:25:26.373825 systemd[1]: Started sshd@20-46.62.164.160:22-147.75.109.163:47178.service - OpenSSH per-connection server daemon (147.75.109.163:47178). Oct 27 08:25:27.392520 sshd[4972]: Accepted publickey for core from 147.75.109.163 port 47178 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:27.393782 sshd-session[4972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:27.401429 systemd-logind[1593]: New session 12 of user core. Oct 27 08:25:27.409660 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 08:25:27.788850 kubelet[2788]: E1027 08:25:27.788721 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:25:28.184281 sshd[4981]: Connection closed by 147.75.109.163 port 47178 Oct 27 08:25:28.186230 sshd-session[4972]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:28.191877 systemd[1]: sshd@20-46.62.164.160:22-147.75.109.163:47178.service: Deactivated successfully. Oct 27 08:25:28.196912 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 08:25:28.199921 systemd-logind[1593]: Session 12 logged out. Waiting for processes to exit. Oct 27 08:25:28.201955 systemd-logind[1593]: Removed session 12. 
Oct 27 08:25:32.788245 kubelet[2788]: E1027 08:25:32.787902 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:25:33.362214 systemd[1]: Started sshd@21-46.62.164.160:22-147.75.109.163:40706.service - OpenSSH per-connection server daemon (147.75.109.163:40706). Oct 27 08:25:33.787390 kubelet[2788]: E1027 08:25:33.787158 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:25:33.790505 kubelet[2788]: E1027 08:25:33.790431 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:25:34.395532 sshd[5003]: Accepted publickey for core from 147.75.109.163 port 40706 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:34.397885 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:34.403142 systemd-logind[1593]: New session 13 of user core. Oct 27 08:25:34.408881 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 08:25:34.790103 containerd[1628]: time="2025-10-27T08:25:34.789690575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 27 08:25:35.170069 sshd[5006]: Connection closed by 147.75.109.163 port 40706 Oct 27 08:25:35.170921 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:35.176417 systemd-logind[1593]: Session 13 logged out. Waiting for processes to exit. Oct 27 08:25:35.176518 systemd[1]: sshd@21-46.62.164.160:22-147.75.109.163:40706.service: Deactivated successfully. 
Oct 27 08:25:35.179844 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 08:25:35.183362 systemd-logind[1593]: Removed session 13. Oct 27 08:25:35.226421 containerd[1628]: time="2025-10-27T08:25:35.226346672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:35.228014 containerd[1628]: time="2025-10-27T08:25:35.227932404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 27 08:25:35.228565 containerd[1628]: time="2025-10-27T08:25:35.228059194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:35.228659 kubelet[2788]: E1027 08:25:35.228326 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:25:35.228659 kubelet[2788]: E1027 08:25:35.228415 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 27 08:25:35.228659 kubelet[2788]: E1027 08:25:35.228632 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wd8vm_calico-system(79766d3c-55af-44b2-853b-a76f9b90d865): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:35.229363 kubelet[2788]: E1027 08:25:35.228683 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:25:35.341111 systemd[1]: Started sshd@22-46.62.164.160:22-147.75.109.163:40708.service - OpenSSH per-connection server daemon (147.75.109.163:40708). Oct 27 08:25:36.343272 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 40708 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:36.344298 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:36.348528 systemd-logind[1593]: New session 14 of user core. Oct 27 08:25:36.356577 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 27 08:25:37.331604 sshd[5024]: Connection closed by 147.75.109.163 port 40708 Oct 27 08:25:37.332714 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:37.339417 systemd-logind[1593]: Session 14 logged out. Waiting for processes to exit. Oct 27 08:25:37.341404 systemd[1]: sshd@22-46.62.164.160:22-147.75.109.163:40708.service: Deactivated successfully. Oct 27 08:25:37.344173 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 08:25:37.347716 systemd-logind[1593]: Removed session 14. Oct 27 08:25:37.507842 systemd[1]: Started sshd@23-46.62.164.160:22-147.75.109.163:40722.service - OpenSSH per-connection server daemon (147.75.109.163:40722). Oct 27 08:25:38.558374 sshd[5034]: Accepted publickey for core from 147.75.109.163 port 40722 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:38.559419 sshd-session[5034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:38.565493 systemd-logind[1593]: New session 15 of user core. Oct 27 08:25:38.573666 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 08:25:38.802470 containerd[1628]: time="2025-10-27T08:25:38.802292680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:25:39.348528 containerd[1628]: time="2025-10-27T08:25:39.348440592Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:39.351171 containerd[1628]: time="2025-10-27T08:25:39.351097042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:25:39.351424 containerd[1628]: time="2025-10-27T08:25:39.351204121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:39.351789 kubelet[2788]: E1027 08:25:39.351657 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:39.351789 kubelet[2788]: E1027 08:25:39.351749 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:39.353470 kubelet[2788]: E1027 08:25:39.352318 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-f56tm_calico-apiserver(1bcea6e5-3c39-41c9-92bc-ee324a63b0a8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:39.353574 kubelet[2788]: E1027 08:25:39.353554 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:25:40.045050 sshd[5037]: Connection closed by 147.75.109.163 port 40722 Oct 27 08:25:40.050687 sshd-session[5034]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:40.059422 systemd[1]: sshd@23-46.62.164.160:22-147.75.109.163:40722.service: Deactivated successfully. Oct 27 08:25:40.062799 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 08:25:40.064428 systemd-logind[1593]: Session 15 logged out. Waiting for processes to exit. Oct 27 08:25:40.066063 systemd-logind[1593]: Removed session 15. Oct 27 08:25:40.226617 systemd[1]: Started sshd@24-46.62.164.160:22-147.75.109.163:40726.service - OpenSSH per-connection server daemon (147.75.109.163:40726). Oct 27 08:25:40.801863 containerd[1628]: time="2025-10-27T08:25:40.801813502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 27 08:25:41.223328 containerd[1628]: time="2025-10-27T08:25:41.223287653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"e4d9bb5631ed6c946074e5b1c013ea8d4d14cde47c9dd54201b62035a20bd364\" pid:5074 exited_at:{seconds:1761553541 nanos:222954340}" Oct 27 08:25:41.232058 containerd[1628]: time="2025-10-27T08:25:41.231897432Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:41.233915 containerd[1628]: time="2025-10-27T08:25:41.233859522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 27 08:25:41.234094 containerd[1628]: time="2025-10-27T08:25:41.233936352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 27 08:25:41.234466 kubelet[2788]: E1027 08:25:41.234294 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:41.234466 kubelet[2788]: E1027 08:25:41.234337 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 27 08:25:41.234466 kubelet[2788]: E1027 08:25:41.234411 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:41.236342 containerd[1628]: time="2025-10-27T08:25:41.236307702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 27 08:25:41.278193 sshd[5056]: Accepted publickey for core from 147.75.109.163 port 40726 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:41.280582 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:41.286531 systemd-logind[1593]: New session 16 of user core. Oct 27 08:25:41.289630 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 08:25:41.678483 containerd[1628]: time="2025-10-27T08:25:41.678367781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:41.680124 containerd[1628]: time="2025-10-27T08:25:41.679960874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 27 08:25:41.680374 containerd[1628]: time="2025-10-27T08:25:41.680096871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 27 08:25:41.680717 kubelet[2788]: E1027 08:25:41.680662 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:25:41.680717 kubelet[2788]: E1027 08:25:41.680716 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 27 08:25:41.680831 kubelet[2788]: E1027 08:25:41.680789 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6b4b456d6b-4jfhq_calico-system(766fb522-b8e8-496d-9871-210f41ee5bf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:41.680858 kubelet[2788]: E1027 08:25:41.680826 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:25:42.333801 sshd[5087]: Connection closed by 147.75.109.163 port 40726 Oct 27 08:25:42.335407 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:42.338800 systemd-logind[1593]: Session 16 logged out. Waiting for processes to exit. Oct 27 08:25:42.340493 systemd[1]: sshd@24-46.62.164.160:22-147.75.109.163:40726.service: Deactivated successfully. Oct 27 08:25:42.343655 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 08:25:42.345943 systemd-logind[1593]: Removed session 16. Oct 27 08:25:42.512655 systemd[1]: Started sshd@25-46.62.164.160:22-147.75.109.163:56558.service - OpenSSH per-connection server daemon (147.75.109.163:56558). Oct 27 08:25:43.522740 sshd[5097]: Accepted publickey for core from 147.75.109.163 port 56558 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:43.524031 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:43.529147 systemd-logind[1593]: New session 17 of user core. Oct 27 08:25:43.535608 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 08:25:44.309753 sshd[5115]: Connection closed by 147.75.109.163 port 56558 Oct 27 08:25:44.309987 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:44.314444 systemd-logind[1593]: Session 17 logged out. Waiting for processes to exit. Oct 27 08:25:44.315047 systemd[1]: sshd@25-46.62.164.160:22-147.75.109.163:56558.service: Deactivated successfully. Oct 27 08:25:44.317758 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 08:25:44.320912 systemd-logind[1593]: Removed session 17. 
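Every pull attempt in the entries above ends the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, kubelet turns that into ErrImagePull and then ImagePullBackOff, and the failing reference is always a ghcr.io/flatcar/calico image at tag v3.30.4. A minimal Python sketch of the same existence check, made directly against the registry instead of through kubelet (assumptions: the images are public, and ghcr.io serves the standard OCI distribution token and manifest endpoints; the repository and tag below are taken from the log, everything else is illustrative):

import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/goldmane"  # any of the failing images from the log
TAG = "v3.30.4"

def anonymous_pull_token(repository: str) -> str:
    # Anonymous pull tokens are how public images on ghcr.io are fetched.
    url = (f"https://{REGISTRY}/token"
           f"?service={REGISTRY}&scope=repository:{repository}:pull")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]

def tag_exists(repository: str, tag: str) -> bool:
    # HEAD the manifest, i.e. the reference containerd fails to resolve above.
    request = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {anonymous_pull_token(repository)}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
    )
    try:
        with urllib.request.urlopen(request):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # the same "not found" kubelet keeps reporting
        raise

if __name__ == "__main__":
    print(f"{REGISTRY}/{REPOSITORY}:{TAG} exists: {tag_exists(REPOSITORY, TAG)}")

A False result from this check corresponds to the "failed to resolve reference ... not found" error containerd logs before kubelet enters back-off for the pod.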
Oct 27 08:25:44.790416 containerd[1628]: time="2025-10-27T08:25:44.790142919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 27 08:25:45.244371 containerd[1628]: time="2025-10-27T08:25:45.244215614Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:45.245352 containerd[1628]: time="2025-10-27T08:25:45.245208608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 27 08:25:45.245352 containerd[1628]: time="2025-10-27T08:25:45.245276898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 27 08:25:45.245494 kubelet[2788]: E1027 08:25:45.245418 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:25:45.245494 kubelet[2788]: E1027 08:25:45.245488 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 27 08:25:45.246642 kubelet[2788]: E1027 08:25:45.245580 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:45.247796 containerd[1628]: time="2025-10-27T08:25:45.247745690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 27 08:25:45.666855 systemd[1]: Started sshd@26-46.62.164.160:22-101.126.131.101:53550.service - OpenSSH per-connection server daemon (101.126.131.101:53550). 
Oct 27 08:25:45.682490 containerd[1628]: time="2025-10-27T08:25:45.681037690Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:45.683844 containerd[1628]: time="2025-10-27T08:25:45.683724879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 27 08:25:45.683844 containerd[1628]: time="2025-10-27T08:25:45.683815357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 27 08:25:45.684427 kubelet[2788]: E1027 08:25:45.684082 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:25:45.684427 kubelet[2788]: E1027 08:25:45.684123 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 27 08:25:45.684427 kubelet[2788]: E1027 08:25:45.684190 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-s6rbz_calico-system(1b761e29-b614-4041-93ad-3a2beca6983c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:45.684601 kubelet[2788]: E1027 08:25:45.684228 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:25:45.788127 containerd[1628]: time="2025-10-27T08:25:45.788050991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 27 08:25:46.227324 containerd[1628]: time="2025-10-27T08:25:46.227270431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:46.228580 containerd[1628]: time="2025-10-27T08:25:46.228435435Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 27 08:25:46.228805 containerd[1628]: time="2025-10-27T08:25:46.228478863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 27 08:25:46.229227 kubelet[2788]: E1027 08:25:46.229159 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:25:46.229227 kubelet[2788]: E1027 08:25:46.229223 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 27 08:25:46.229332 kubelet[2788]: E1027 08:25:46.229299 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-74d68549b8-grhgf_calico-system(e5f8aee0-010a-43df-b3cc-29e7716b4073): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:46.229391 kubelet[2788]: E1027 08:25:46.229333 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:25:46.792031 kubelet[2788]: E1027 08:25:46.791669 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:25:46.792444 containerd[1628]: time="2025-10-27T08:25:46.792423021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 27 08:25:48.140775 containerd[1628]: time="2025-10-27T08:25:48.140573123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 27 08:25:48.145745 containerd[1628]: time="2025-10-27T08:25:48.145610865Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 27 08:25:48.145745 containerd[1628]: time="2025-10-27T08:25:48.145701355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 27 08:25:48.146041 kubelet[2788]: E1027 08:25:48.145939 2788 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:48.146041 kubelet[2788]: E1027 08:25:48.145988 2788 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 27 08:25:48.147000 kubelet[2788]: E1027 08:25:48.146064 2788 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-68d8c5c9bc-jsc7m_calico-apiserver(96a60c22-8a13-49d1-8749-b73cb7e464a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 27 08:25:48.147000 kubelet[2788]: E1027 08:25:48.146098 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:25:49.520565 systemd[1]: Started sshd@27-46.62.164.160:22-147.75.109.163:56564.service - OpenSSH per-connection server daemon (147.75.109.163:56564). Oct 27 08:25:49.576422 systemd[1]: Started sshd@28-46.62.164.160:22-131.100.242.102:50204.service - OpenSSH per-connection server daemon (131.100.242.102:50204). Oct 27 08:25:50.638342 sshd[5139]: Accepted publickey for core from 147.75.109.163 port 56564 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:50.641859 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:50.648815 systemd-logind[1593]: New session 18 of user core. Oct 27 08:25:50.656743 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 08:25:50.985387 sshd[5143]: Received disconnect from 131.100.242.102 port 50204:11: Bye Bye [preauth] Oct 27 08:25:50.985387 sshd[5143]: Disconnected from authenticating user root 131.100.242.102 port 50204 [preauth] Oct 27 08:25:50.987667 systemd[1]: sshd@28-46.62.164.160:22-131.100.242.102:50204.service: Deactivated successfully. 
Oct 27 08:25:51.495283 sshd[5146]: Connection closed by 147.75.109.163 port 56564 Oct 27 08:25:51.498200 sshd-session[5139]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:51.503332 systemd[1]: sshd@27-46.62.164.160:22-147.75.109.163:56564.service: Deactivated successfully. Oct 27 08:25:51.506289 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 08:25:51.508486 systemd-logind[1593]: Session 18 logged out. Waiting for processes to exit. Oct 27 08:25:51.510303 systemd-logind[1593]: Removed session 18. Oct 27 08:25:51.787092 kubelet[2788]: E1027 08:25:51.786961 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:25:56.650189 systemd[1]: Started sshd@29-46.62.164.160:22-147.75.109.163:34606.service - OpenSSH per-connection server daemon (147.75.109.163:34606). Oct 27 08:25:56.793971 kubelet[2788]: E1027 08:25:56.793432 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:25:57.686007 sshd[5160]: Accepted publickey for core from 147.75.109.163 port 34606 ssh2: RSA SHA256:VBzT7lRU7iKzR07sl+BRHKYnd7nyLYgikPwEjDMWwKQ Oct 27 08:25:57.687578 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 08:25:57.696905 systemd-logind[1593]: New session 19 of user core. Oct 27 08:25:57.702610 systemd[1]: Started session-19.scope - Session 19 of User core. 
Oct 27 08:25:57.787267 kubelet[2788]: E1027 08:25:57.786913 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:25:58.458600 sshd[5163]: Connection closed by 147.75.109.163 port 34606 Oct 27 08:25:58.460738 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Oct 27 08:25:58.467345 systemd-logind[1593]: Session 19 logged out. Waiting for processes to exit. Oct 27 08:25:58.468054 systemd[1]: sshd@29-46.62.164.160:22-147.75.109.163:34606.service: Deactivated successfully. Oct 27 08:25:58.472524 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 08:25:58.474933 systemd-logind[1593]: Removed session 19. Oct 27 08:25:59.787734 kubelet[2788]: E1027 08:25:59.787436 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:25:59.788835 kubelet[2788]: E1027 08:25:59.788797 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:26:02.791090 kubelet[2788]: E1027 08:26:02.791035 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 
08:26:04.790198 kubelet[2788]: E1027 08:26:04.790127 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8" Oct 27 08:26:04.845731 systemd[1]: Started sshd@30-46.62.164.160:22-103.172.236.249:34166.service - OpenSSH per-connection server daemon (103.172.236.249:34166). Oct 27 08:26:06.182906 sshd[5177]: Received disconnect from 103.172.236.249 port 34166:11: Bye Bye [preauth] Oct 27 08:26:06.182906 sshd[5177]: Disconnected from authenticating user root 103.172.236.249 port 34166 [preauth] Oct 27 08:26:06.185320 systemd[1]: sshd@30-46.62.164.160:22-103.172.236.249:34166.service: Deactivated successfully. Oct 27 08:26:10.787187 kubelet[2788]: E1027 08:26:10.787064 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865" Oct 27 08:26:10.788027 kubelet[2788]: E1027 08:26:10.787618 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3" Oct 27 08:26:11.165372 containerd[1628]: time="2025-10-27T08:26:11.165309731Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1750137e5f1a22acdfcb9b9de649a58767d8e5a42970d60a2b5b3675d6bfc4b5\" id:\"bc39081f28465e6c091d5b691ee8b1a899e7dbd3c281a32fe09677d81438794e\" pid:5195 exited_at:{seconds:1761553571 nanos:164960424}" Oct 27 08:26:11.788001 kubelet[2788]: E1027 08:26:11.787761 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c" Oct 27 08:26:11.788001 kubelet[2788]: E1027 08:26:11.787885 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073" Oct 27 08:26:14.719882 systemd[1]: cri-containerd-775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd.scope: Deactivated successfully. Oct 27 08:26:14.720181 systemd[1]: cri-containerd-775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd.scope: Consumed 2.928s CPU time, 90M memory peak, 55.2M read from disk. Oct 27 08:26:14.762833 containerd[1628]: time="2025-10-27T08:26:14.762777156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\" id:\"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\" pid:2621 exit_status:1 exited_at:{seconds:1761553574 nanos:762263124}" Oct 27 08:26:14.764413 containerd[1628]: time="2025-10-27T08:26:14.763182159Z" level=info msg="received exit event container_id:\"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\" id:\"775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd\" pid:2621 exit_status:1 exited_at:{seconds:1761553574 nanos:762263124}" Oct 27 08:26:14.883624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd-rootfs.mount: Deactivated successfully. Oct 27 08:26:15.057232 systemd[1]: cri-containerd-cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6.scope: Deactivated successfully. Oct 27 08:26:15.057732 systemd[1]: cri-containerd-cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6.scope: Consumed 17.421s CPU time, 134.9M memory peak, 43.8M read from disk. 
Oct 27 08:26:15.063507 containerd[1628]: time="2025-10-27T08:26:15.063241229Z" level=info msg="received exit event container_id:\"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\" id:\"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\" pid:3141 exit_status:1 exited_at:{seconds:1761553575 nanos:62720937}" Oct 27 08:26:15.084830 containerd[1628]: time="2025-10-27T08:26:15.084768502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\" id:\"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\" pid:3141 exit_status:1 exited_at:{seconds:1761553575 nanos:62720937}" Oct 27 08:26:15.101937 kubelet[2788]: E1027 08:26:15.101388 2788 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37374->10.0.0.2:2379: read: connection timed out" Oct 27 08:26:15.120122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6-rootfs.mount: Deactivated successfully. Oct 27 08:26:15.543043 kubelet[2788]: I1027 08:26:15.542979 2788 scope.go:117] "RemoveContainer" containerID="cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6" Oct 27 08:26:15.543313 kubelet[2788]: I1027 08:26:15.543122 2788 scope.go:117] "RemoveContainer" containerID="775d8600a4af87dc125a24d48648834ca7ea10a1e0d8e75eafbbae67ca5a5ffd" Oct 27 08:26:15.577950 containerd[1628]: time="2025-10-27T08:26:15.577358983Z" level=info msg="CreateContainer within sandbox \"e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 27 08:26:15.590687 containerd[1628]: time="2025-10-27T08:26:15.590629531Z" level=info msg="CreateContainer within sandbox \"4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 27 08:26:15.703185 containerd[1628]: time="2025-10-27T08:26:15.703133809Z" level=info msg="Container fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:26:15.721387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740573467.mount: Deactivated successfully. 
Oct 27 08:26:15.724272 containerd[1628]: time="2025-10-27T08:26:15.722838181Z" level=info msg="Container efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6: CDI devices from CRI Config.CDIDevices: []" Oct 27 08:26:15.745935 containerd[1628]: time="2025-10-27T08:26:15.745902825Z" level=info msg="CreateContainer within sandbox \"4805f5faa3b5d1650050df77bb07837020083b40680ddbdacc9f7482c2886694\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\"" Oct 27 08:26:15.746727 containerd[1628]: time="2025-10-27T08:26:15.746708106Z" level=info msg="StartContainer for \"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\"" Oct 27 08:26:15.747569 containerd[1628]: time="2025-10-27T08:26:15.747543507Z" level=info msg="connecting to shim fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91" address="unix:///run/containerd/s/203ea2fa2d4c1ed4d7b17754baf76edff388cabb5e513aacaf79691ed1326419" protocol=ttrpc version=3 Oct 27 08:26:15.754295 containerd[1628]: time="2025-10-27T08:26:15.754242764Z" level=info msg="CreateContainer within sandbox \"e8e82b981cb185b0817d3a7183caaf694e5ff20a9652b2143f2e4fce86ecd211\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6\"" Oct 27 08:26:15.763094 containerd[1628]: time="2025-10-27T08:26:15.763032458Z" level=info msg="StartContainer for \"efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6\"" Oct 27 08:26:15.772723 containerd[1628]: time="2025-10-27T08:26:15.772671522Z" level=info msg="connecting to shim efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6" address="unix:///run/containerd/s/c0d53ad9060f82447263bcd15514d9543f802979296e7589f4964e87ef6b898d" protocol=ttrpc version=3 Oct 27 08:26:15.781621 systemd[1]: Started cri-containerd-fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91.scope - libcontainer container fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91. Oct 27 08:26:15.787091 kubelet[2788]: E1027 08:26:15.787024 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7" Oct 27 08:26:15.810886 systemd[1]: Started cri-containerd-efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6.scope - libcontainer container efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6. 
Oct 27 08:26:15.847717 containerd[1628]: time="2025-10-27T08:26:15.847689745Z" level=info msg="StartContainer for \"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\" returns successfully"
Oct 27 08:26:15.874993 containerd[1628]: time="2025-10-27T08:26:15.874626612Z" level=info msg="StartContainer for \"efc324848fcf22786e3bb59a430c551a19626fb055a48050fd9ff007795215b6\" returns successfully"
Oct 27 08:26:17.787874 kubelet[2788]: E1027 08:26:17.787748 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8"
Oct 27 08:26:19.558429 systemd[1]: Started sshd@31-46.62.164.160:22-64.227.134.24:51788.service - OpenSSH per-connection server daemon (64.227.134.24:51788).
Oct 27 08:26:19.782788 systemd[1]: cri-containerd-575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0.scope: Deactivated successfully.
Oct 27 08:26:19.783062 systemd[1]: cri-containerd-575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0.scope: Consumed 2.320s CPU time, 37.2M memory peak, 29.8M read from disk.
Oct 27 08:26:19.787583 containerd[1628]: time="2025-10-27T08:26:19.787531948Z" level=info msg="TaskExit event in podsandbox handler container_id:\"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\" id:\"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\" pid:2614 exit_status:1 exited_at:{seconds:1761553579 nanos:786875440}"
Oct 27 08:26:19.788059 containerd[1628]: time="2025-10-27T08:26:19.788016701Z" level=info msg="received exit event container_id:\"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\" id:\"575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0\" pid:2614 exit_status:1 exited_at:{seconds:1761553579 nanos:786875440}"
Oct 27 08:26:19.811663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0-rootfs.mount: Deactivated successfully.
Oct 27 08:26:20.491668 sshd[5300]: Invalid user lz from 64.227.134.24 port 51788
Oct 27 08:26:20.585652 kubelet[2788]: I1027 08:26:20.585602 2788 scope.go:117] "RemoveContainer" containerID="575ddd8d2d2647a53f10980be36100450a13a0e7909913ad36dd899650f9c7a0"
Oct 27 08:26:20.588981 containerd[1628]: time="2025-10-27T08:26:20.588903082Z" level=info msg="CreateContainer within sandbox \"e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 27 08:26:20.606236 containerd[1628]: time="2025-10-27T08:26:20.604401494Z" level=info msg="Container 7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278: CDI devices from CRI Config.CDIDevices: []"
Oct 27 08:26:20.616644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488863378.mount: Deactivated successfully.
Oct 27 08:26:20.621519 containerd[1628]: time="2025-10-27T08:26:20.621444180Z" level=info msg="CreateContainer within sandbox \"e5d3e86a8cbce9e7d4cea5c07bdea8f4895eb7dedddedc225eb02673bc808af4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278\""
Oct 27 08:26:20.622161 containerd[1628]: time="2025-10-27T08:26:20.622113066Z" level=info msg="StartContainer for \"7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278\""
Oct 27 08:26:20.623941 containerd[1628]: time="2025-10-27T08:26:20.623906311Z" level=info msg="connecting to shim 7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278" address="unix:///run/containerd/s/3d62191636e3eac6c0a414c07b913ac882cedfacb75ebb6f01af3a48480b41c8" protocol=ttrpc version=3
Oct 27 08:26:20.657713 systemd[1]: Started cri-containerd-7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278.scope - libcontainer container 7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278.
Oct 27 08:26:20.664757 sshd[5300]: Received disconnect from 64.227.134.24 port 51788:11: Bye Bye [preauth]
Oct 27 08:26:20.664757 sshd[5300]: Disconnected from invalid user lz 64.227.134.24 port 51788 [preauth]
Oct 27 08:26:20.668403 systemd[1]: sshd@31-46.62.164.160:22-64.227.134.24:51788.service: Deactivated successfully.
Oct 27 08:26:20.752903 containerd[1628]: time="2025-10-27T08:26:20.752662974Z" level=info msg="StartContainer for \"7d26c54f8c057de4d3aa432f0ea753166bba42d8886f848128560db73950f278\" returns successfully"
Oct 27 08:26:21.446818 systemd[1]: sshd@11-46.62.164.160:22-14.103.173.90:52560.service: Deactivated successfully.
Oct 27 08:26:22.788323 kubelet[2788]: E1027 08:26:22.788161 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073"
Oct 27 08:26:23.786910 kubelet[2788]: E1027 08:26:23.786839 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wd8vm" podUID="79766d3c-55af-44b2-853b-a76f9b90d865"
Oct 27 08:26:24.787380 kubelet[2788]: E1027 08:26:24.787312 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b4b456d6b-4jfhq" podUID="766fb522-b8e8-496d-9871-210f41ee5bf3"
Oct 27 08:26:25.110189 kubelet[2788]: E1027 08:26:25.109162 2788 controller.go:195] "Failed to update lease" err="Put \"https://46.62.164.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-9-k-f136f833c6?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 27 08:26:25.786884 kubelet[2788]: E1027 08:26:25.786798 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-s6rbz" podUID="1b761e29-b614-4041-93ad-3a2beca6983c"
Oct 27 08:26:26.786795 kubelet[2788]: E1027 08:26:26.786695 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-jsc7m" podUID="96a60c22-8a13-49d1-8749-b73cb7e464a7"
Oct 27 08:26:27.268116 systemd[1]: cri-containerd-fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91.scope: Deactivated successfully.
Oct 27 08:26:27.268751 systemd[1]: cri-containerd-fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91.scope: Consumed 267ms CPU time, 72.8M memory peak, 30.4M read from disk.
Oct 27 08:26:27.270122 containerd[1628]: time="2025-10-27T08:26:27.270074047Z" level=info msg="received exit event container_id:\"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\" id:\"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\" pid:5257 exit_status:1 exited_at:{seconds:1761553587 nanos:269538804}"
Oct 27 08:26:27.270736 containerd[1628]: time="2025-10-27T08:26:27.270405252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\" id:\"fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91\" pid:5257 exit_status:1 exited_at:{seconds:1761553587 nanos:269538804}"
Oct 27 08:26:27.294158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91-rootfs.mount: Deactivated successfully.
Oct 27 08:26:27.611562 kubelet[2788]: I1027 08:26:27.611512 2788 scope.go:117] "RemoveContainer" containerID="cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6"
Oct 27 08:26:27.611562 kubelet[2788]: I1027 08:26:27.611592 2788 scope.go:117] "RemoveContainer" containerID="fee26eca58db004ccfffc4ac8cfe0ba6aaf6e45a85f3d4519d382bddac58bc91"
Oct 27 08:26:27.611562 kubelet[2788]: E1027 08:26:27.611777 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-qg2hz_tigera-operator(c2430e51-b5ad-4e47-8fac-aa1a5b9f7219)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-qg2hz" podUID="c2430e51-b5ad-4e47-8fac-aa1a5b9f7219"
Oct 27 08:26:27.616761 containerd[1628]: time="2025-10-27T08:26:27.616724787Z" level=info msg="RemoveContainer for \"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\""
Oct 27 08:26:27.663902 containerd[1628]: time="2025-10-27T08:26:27.663832281Z" level=info msg="RemoveContainer for \"cbf69d5867f4d976d0ee5b0014f7e9bce4543c19031470e680fbd5d76a0da5b6\" returns successfully"
Oct 27 08:26:31.786795 kubelet[2788]: E1027 08:26:31.786687 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68d8c5c9bc-f56tm" podUID="1bcea6e5-3c39-41c9-92bc-ee324a63b0a8"
Oct 27 08:26:34.786679 kubelet[2788]: E1027 08:26:34.786432 2788 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74d68549b8-grhgf" podUID="e5f8aee0-010a-43df-b3cc-29e7716b4073"
Oct 27 08:26:35.119499 kubelet[2788]: E1027 08:26:35.119433 2788 request.go:1196] "Unexpected error when reading response body" err="context deadline exceeded (Client.Timeout or context cancellation while reading body)"
Oct 27 08:26:35.120794 kubelet[2788]: E1027 08:26:35.120734 2788 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: context deadline exceeded (Client.Timeout or context cancellation while reading body)"