Jan 17 00:01:35.887274 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 00:01:35.887300 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:01:35.887311 kernel: KASLR enabled
Jan 17 00:01:35.887317 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 17 00:01:35.887323 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 17 00:01:35.887329 kernel: random: crng init done
Jan 17 00:01:35.887337 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:01:35.887343 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 17 00:01:35.887350 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 17 00:01:35.887358 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887365 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887371 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887378 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887384 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887392 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887400 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887407 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887414 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 17 00:01:35.887420 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 17 00:01:35.887427 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 17 00:01:35.887434 kernel: NUMA: Failed to initialise from firmware
Jan 17 00:01:35.887441 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 17 00:01:35.887448 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 17 00:01:35.887455 kernel: Zone ranges:
Jan 17 00:01:35.887461 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 00:01:35.887470 kernel: DMA32 empty
Jan 17 00:01:35.887477 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 17 00:01:35.887483 kernel: Movable zone start for each node
Jan 17 00:01:35.887490 kernel: Early memory node ranges
Jan 17 00:01:35.887497 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 17 00:01:35.887504 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 17 00:01:35.888544 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 17 00:01:35.888557 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 17 00:01:35.888564 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 17 00:01:35.888571 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 17 00:01:35.888578 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 17 00:01:35.888585 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 17 00:01:35.888597 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 17 00:01:35.888604 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:01:35.888623 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 00:01:35.888635 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:01:35.888642 kernel: psci: Trusted OS migration not required
Jan 17 00:01:35.888650 kernel: psci: SMC Calling Convention v1.1
Jan 17 00:01:35.888659 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 17 00:01:35.888666 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:01:35.888673 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:01:35.888681 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:01:35.888688 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:01:35.888695 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:01:35.888703 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 00:01:35.888710 kernel: CPU features: detected: Spectre-v4
Jan 17 00:01:35.888718 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:01:35.888725 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 00:01:35.888734 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 00:01:35.888742 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 00:01:35.888749 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 00:01:35.888757 kernel: alternatives: applying boot alternatives
Jan 17 00:01:35.888765 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:01:35.888773 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:01:35.888780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:01:35.888788 kernel: Fallback order for Node 0: 0
Jan 17 00:01:35.888795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 17 00:01:35.888802 kernel: Policy zone: Normal
Jan 17 00:01:35.888810 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:01:35.888818 kernel: software IO TLB: area num 2.
Jan 17 00:01:35.888826 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 17 00:01:35.888833 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Jan 17 00:01:35.888841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:01:35.888848 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:01:35.888856 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:01:35.888863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:01:35.888871 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:01:35.888878 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:01:35.888886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:01:35.888893 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:01:35.888900 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:01:35.888909 kernel: GICv3: 256 SPIs implemented
Jan 17 00:01:35.888917 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:01:35.888924 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:01:35.888931 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 00:01:35.888938 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 17 00:01:35.888945 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 17 00:01:35.888953 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 00:01:35.888960 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 00:01:35.888968 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 17 00:01:35.888975 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 17 00:01:35.888983 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:01:35.888992 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:01:35.888999 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 00:01:35.889006 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 00:01:35.889014 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 00:01:35.889021 kernel: Console: colour dummy device 80x25
Jan 17 00:01:35.889029 kernel: ACPI: Core revision 20230628
Jan 17 00:01:35.889037 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 00:01:35.889045 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:01:35.889052 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:01:35.889060 kernel: landlock: Up and running.
Jan 17 00:01:35.889069 kernel: SELinux: Initializing.
Jan 17 00:01:35.889076 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:01:35.889084 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:01:35.889092 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:01:35.889099 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:01:35.889107 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:01:35.889114 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:01:35.889122 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 17 00:01:35.889129 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 17 00:01:35.889138 kernel: Remapping and enabling EFI services.
Jan 17 00:01:35.889146 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:01:35.889153 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:01:35.889161 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 17 00:01:35.889169 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 17 00:01:35.889177 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:01:35.889184 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 00:01:35.889191 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:01:35.889199 kernel: SMP: Total of 2 processors activated.
Jan 17 00:01:35.889208 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:01:35.889215 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 00:01:35.889223 kernel: CPU features: detected: Common not Private translations
Jan 17 00:01:35.889237 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:01:35.889246 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 17 00:01:35.889254 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 00:01:35.889262 kernel: CPU features: detected: LSE atomic instructions
Jan 17 00:01:35.889270 kernel: CPU features: detected: Privileged Access Never
Jan 17 00:01:35.889278 kernel: CPU features: detected: RAS Extension Support
Jan 17 00:01:35.889287 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 17 00:01:35.889296 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:01:35.889303 kernel: alternatives: applying system-wide alternatives
Jan 17 00:01:35.889311 kernel: devtmpfs: initialized
Jan 17 00:01:35.889319 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:01:35.889327 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:01:35.889335 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:01:35.889343 kernel: SMBIOS 3.0.0 present.
Jan 17 00:01:35.889352 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 17 00:01:35.889360 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:01:35.889368 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:01:35.889376 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:01:35.889384 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:01:35.889392 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:01:35.889400 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Jan 17 00:01:35.889408 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:01:35.889416 kernel: cpuidle: using governor menu
Jan 17 00:01:35.889426 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:01:35.889434 kernel: ASID allocator initialised with 32768 entries
Jan 17 00:01:35.889442 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:01:35.889450 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:01:35.889458 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 00:01:35.889465 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 00:01:35.889473 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:01:35.889481 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:01:35.889489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:01:35.889499 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:01:35.889515 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:01:35.889524 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:01:35.889532 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:01:35.889540 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:01:35.889548 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:01:35.889556 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:01:35.889563 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:01:35.889571 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:01:35.889582 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:01:35.889590 kernel: ACPI: Interpreter enabled
Jan 17 00:01:35.889598 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:01:35.889606 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 00:01:35.889626 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 00:01:35.889634 kernel: printk: console [ttyAMA0] enabled
Jan 17 00:01:35.889643 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 17 00:01:35.889828 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:01:35.889916 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 00:01:35.889987 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 00:01:35.890056 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 17 00:01:35.890125 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 17 00:01:35.890135 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 17 00:01:35.890143 kernel: PCI host bridge to bus 0000:00
Jan 17 00:01:35.890219 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 17 00:01:35.890287 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 00:01:35.890350 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 17 00:01:35.890411 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 17 00:01:35.890498 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 17 00:01:35.892368 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 17 00:01:35.892450 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 17 00:01:35.892540 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 17 00:01:35.892673 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.892746 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 17 00:01:35.892827 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.892897 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 17 00:01:35.892975 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.893043 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 17 00:01:35.893124 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.893194 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 17 00:01:35.893270 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.893340 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 17 00:01:35.893428 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.893499 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 17 00:01:35.894437 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.894579 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 17 00:01:35.894694 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.894771 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 17 00:01:35.894849 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 17 00:01:35.894919 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 17 00:01:35.895004 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 17 00:01:35.895075 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 17 00:01:35.895157 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:01:35.895229 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 17 00:01:35.895300 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 00:01:35.895371 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:01:35.895449 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 17 00:01:35.895586 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 17 00:01:35.895722 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 17 00:01:35.895800 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 17 00:01:35.895870 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 17 00:01:35.895950 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 17 00:01:35.896023 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 17 00:01:35.896109 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 17 00:01:35.896183 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 17 00:01:35.896254 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 17 00:01:35.896338 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 17 00:01:35.896411 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 17 00:01:35.896484 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 17 00:01:35.896650 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 17 00:01:35.896728 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 17 00:01:35.896802 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 17 00:01:35.896872 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 17 00:01:35.896948 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 17 00:01:35.897018 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 17 00:01:35.897087 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 17 00:01:35.897165 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 17 00:01:35.897235 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 17 00:01:35.897304 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 17 00:01:35.897378 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 17 00:01:35.897459 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 17 00:01:35.897577 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 17 00:01:35.897702 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 17 00:01:35.897781 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 17 00:01:35.897880 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 17 00:01:35.897957 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 17 00:01:35.898024 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 17 00:01:35.898091 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 17 00:01:35.898162 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 17 00:01:35.898231 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 17 00:01:35.898299 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 17 00:01:35.898373 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 17 00:01:35.898441 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 17 00:01:35.898521 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 17 00:01:35.898623 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 17 00:01:35.898705 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 17 00:01:35.898781 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 17 00:01:35.898855 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 17 00:01:35.898928 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 17 00:01:35.898996 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 17 00:01:35.899067 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 17 00:01:35.899136 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:01:35.899211 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 17 00:01:35.899281 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:01:35.899351 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 17 00:01:35.899425 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:01:35.899494 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 17 00:01:35.899623 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:01:35.899700 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 17 00:01:35.899768 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:01:35.899837 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 17 00:01:35.899905 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:01:35.899978 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 17 00:01:35.900046 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:01:35.900113 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 17 00:01:35.900180 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:01:35.900249 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 17 00:01:35.900317 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:01:35.900389 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 17 00:01:35.900462 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 17 00:01:35.900545 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 17 00:01:35.900652 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 17 00:01:35.900760 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 17 00:01:35.900835 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 17 00:01:35.900906 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 17 00:01:35.900976 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 17 00:01:35.901048 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 17 00:01:35.901123 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 17 00:01:35.901194 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 17 00:01:35.901264 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 17 00:01:35.901335 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 17 00:01:35.901404 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 17 00:01:35.901474 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 17 00:01:35.901650 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 17 00:01:35.901729 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 17 00:01:35.901804 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 17 00:01:35.901873 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 17 00:01:35.901941 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 17 00:01:35.902013 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 17 00:01:35.902088 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 17 00:01:35.902159 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 17 00:01:35.902230 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 17 00:01:35.902298 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 17 00:01:35.902368 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 17 00:01:35.902435 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 17 00:01:35.902503 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:01:35.902696 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 17 00:01:35.902788 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 17 00:01:35.902878 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 17 00:01:35.903018 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 17 00:01:35.903095 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:01:35.903174 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 17 00:01:35.903249 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 17 00:01:35.903320 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 17 00:01:35.903390 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 17 00:01:35.903467 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 17 00:01:35.903576 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:01:35.903723 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 17 00:01:35.903802 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 17 00:01:35.905776 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 17 00:01:35.905873 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 17 00:01:35.905950 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:01:35.906029 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 17 00:01:35.906110 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 17 00:01:35.906182 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 17 00:01:35.906250 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 17 00:01:35.906318 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 17 00:01:35.906384 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:01:35.906462 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 17 00:01:35.906656 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 17 00:01:35.906750 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 17 00:01:35.906826 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 17 00:01:35.906893 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 17 00:01:35.906960 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:01:35.907076 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 17 00:01:35.907154 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 17 00:01:35.907226 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 17 00:01:35.907297 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 17 00:01:35.907364 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 17 00:01:35.907435 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 17 00:01:35.907503 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:01:35.907606 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 17 00:01:35.907705 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 17 00:01:35.907781 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 17 00:01:35.907850 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:01:35.907922 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 17 00:01:35.907992 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 17 00:01:35.908066 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 17 00:01:35.908134 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:01:35.908205 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 17 00:01:35.908267 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 00:01:35.908328 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 17 00:01:35.908403 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 17 00:01:35.908471 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 17 00:01:35.908713 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 17 00:01:35.908842 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 17 00:01:35.908997 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 17 00:01:35.909651 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 17 00:01:35.909791 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 17 00:01:35.909888 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 17 00:01:35.909990 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 17 00:01:35.910095 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 17 00:01:35.910190 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 17 00:01:35.910302 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 17 00:01:35.910419 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 17 00:01:35.912586 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 17 00:01:35.912742 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 17 00:01:35.912836 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 17 00:01:35.912902 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 17 00:01:35.912969 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 17 00:01:35.913041 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 17 00:01:35.913110 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 17 00:01:35.913174 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 17 00:01:35.913250 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 17 00:01:35.913314 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 17 00:01:35.913377 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 17 00:01:35.913464 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 17 00:01:35.913566 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 17 00:01:35.913659 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 17 00:01:35.913672 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 00:01:35.913681 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 00:01:35.913690 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 00:01:35.913698 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 00:01:35.913707 kernel: iommu: Default domain type: Translated
Jan 17 00:01:35.913715 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:01:35.913724 kernel: efivars: Registered efivars operations
Jan 17 00:01:35.913734 kernel: vgaarb: loaded
Jan 17 00:01:35.913743 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:01:35.913751 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:01:35.913759 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:01:35.913767 kernel: pnp: PnP ACPI init
Jan 17 00:01:35.913849 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 17 00:01:35.913861 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 00:01:35.913870 kernel: NET: Registered PF_INET protocol family
Jan 17 00:01:35.913878 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:01:35.913890 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:01:35.913899 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:01:35.913907 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:01:35.913916 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:01:35.913924 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:01:35.913932 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:01:35.913941 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:01:35.913949 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:01:35.914030 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 17 00:01:35.914045 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:01:35.914054 kernel: kvm [1]: HYP mode not available
Jan 17 00:01:35.914062 kernel: Initialise system trusted keyrings
Jan 17 00:01:35.914071 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:01:35.914079 kernel: Key type asymmetric registered
Jan 17 00:01:35.914087 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:01:35.914095 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 00:01:35.914103 kernel: io scheduler mq-deadline registered
Jan 17 00:01:35.914111 kernel: io scheduler kyber registered
Jan 17 00:01:35.914121 kernel: io scheduler bfq registered
Jan 17 00:01:35.914130 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 00:01:35.914206 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 17 00:01:35.914279 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 17 00:01:35.914349 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.914542 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 17 00:01:35.914650 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 17 00:01:35.914731 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.914804 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 17 00:01:35.914873 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 17 00:01:35.914944 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.915017 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 17 00:01:35.915091 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 17 00:01:35.915161 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.915234 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 17 00:01:35.915304 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 17 00:01:35.915374 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.915450 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 17 00:01:35.917606 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 17 00:01:35.917732 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.917810 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 17 00:01:35.917883 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 17 00:01:35.917955 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.918029 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 17 00:01:35.918108 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 17 00:01:35.918181 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.918193 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 17 00:01:35.918265 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 17 00:01:35.918335 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 17 00:01:35.918406 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 17 00:01:35.918418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 00:01:35.918429 kernel: ACPI: button: Power Button [PWRB]
Jan 17 00:01:35.918438 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 00:01:35.919081 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 17 00:01:35.919196 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 17 00:01:35.919209 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:01:35.919218 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 00:01:35.919293 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 17 00:01:35.919305 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 17 00:01:35.919313 kernel: thunder_xcv, ver 1.0
Jan 17 00:01:35.919328 kernel: thunder_bgx, ver 1.0
Jan 17 00:01:35.919337 kernel: nicpf, ver 1.0
Jan 17 00:01:35.919345 kernel: nicvf, ver 1.0
Jan 17 00:01:35.919427 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 00:01:35.919494 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:01:35 UTC (1768608095)
Jan 17 00:01:35.919505 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:01:35.919575 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 17 00:01:35.919584 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 00:01:35.919597 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 00:01:35.919605 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:01:35.919626 kernel: Segment Routing with IPv6
Jan 17 00:01:35.919634 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:01:35.919643 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:01:35.919651 kernel: Key type dns_resolver registered
Jan 17 00:01:35.919659 kernel: registered taskstats version 1
Jan 17 00:01:35.919668 kernel: Loading compiled-in X.509 certificates
Jan 17 00:01:35.919676 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 17 00:01:35.919687 kernel: Key type .fscrypt registered
Jan 17 00:01:35.919695 kernel: Key type fscrypt-provisioning registered
Jan 17 00:01:35.919703 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:01:35.919712 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:01:35.919720 kernel: ima: No architecture policies found
Jan 17 00:01:35.919728 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 00:01:35.919737 kernel: clk: Disabling unused clocks
Jan 17 00:01:35.919745 kernel: Freeing unused kernel memory: 39424K
Jan 17 00:01:35.919753 kernel: Run /init as init process
Jan 17 00:01:35.919763 kernel: with arguments:
Jan 17 00:01:35.919772 kernel: /init
Jan 17 00:01:35.919780 kernel: with environment:
Jan 17 00:01:35.919788 kernel: HOME=/
Jan 17 00:01:35.919796 kernel: TERM=linux
Jan 17 00:01:35.919806 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:01:35.919817 systemd[1]: Detected virtualization kvm.
Jan 17 00:01:35.919826 systemd[1]: Detected architecture arm64.
Jan 17 00:01:35.919836 systemd[1]: Running in initrd.
Jan 17 00:01:35.919844 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:01:35.919853 systemd[1]: Hostname set to .
Jan 17 00:01:35.919862 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:01:35.919871 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:01:35.919880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:01:35.919889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:01:35.919898 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:01:35.919909 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:01:35.919918 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:01:35.919929 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:01:35.919940 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:01:35.919949 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:01:35.919958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:01:35.919968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:35.919979 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:01:35.919988 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:01:35.919997 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:01:35.920006 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:01:35.920015 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:01:35.920024 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:01:35.920033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:01:35.920042 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:01:35.920053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:35.920062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:01:35.920071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:01:35.920080 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:01:35.920090 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:01:35.920099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:01:35.920108 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:01:35.920116 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:01:35.920125 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:01:35.920136 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:01:35.920145 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:35.920154 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:01:35.920163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:35.920199 systemd-journald[237]: Collecting audit messages is disabled.
Jan 17 00:01:35.920224 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:01:35.920234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:01:35.920243 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:35.920254 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:35.920263 systemd-journald[237]: Journal started
Jan 17 00:01:35.920284 systemd-journald[237]: Runtime Journal (/run/log/journal/1e94bf0dc1b04dc6aeb38ea8cef13071) is 8.0M, max 76.6M, 68.6M free.
Jan 17 00:01:35.905884 systemd-modules-load[238]: Inserted module 'overlay'
Jan 17 00:01:35.922102 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:01:35.923574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:01:35.931592 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:01:35.932598 kernel: Bridge firewalling registered
Jan 17 00:01:35.932192 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 17 00:01:35.932811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:01:35.938366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:01:35.939494 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:01:35.950085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:01:35.952718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:35.954566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:01:35.959304 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:01:35.961337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:01:35.971809 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:35.980756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:35.984821 dracut-cmdline[270]: dracut-dracut-053
Jan 17 00:01:35.988669 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:01:36.011498 systemd-resolved[276]: Positive Trust Anchors:
Jan 17 00:01:36.011529 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:01:36.011561 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:01:36.016422 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 17 00:01:36.018157 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:36.020228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:36.088557 kernel: SCSI subsystem initialized
Jan 17 00:01:36.092539 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:01:36.100565 kernel: iscsi: registered transport (tcp)
Jan 17 00:01:36.114603 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:01:36.114709 kernel: QLogic iSCSI HBA Driver
Jan 17 00:01:36.166954 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:01:36.172698 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:01:36.199546 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:01:36.200716 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:01:36.200760 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:01:36.250636 kernel: raid6: neonx8 gen() 15662 MB/s
Jan 17 00:01:36.267576 kernel: raid6: neonx4 gen() 15578 MB/s
Jan 17 00:01:36.284578 kernel: raid6: neonx2 gen() 13164 MB/s
Jan 17 00:01:36.301547 kernel: raid6: neonx1 gen() 10378 MB/s
Jan 17 00:01:36.318557 kernel: raid6: int64x8 gen() 6928 MB/s
Jan 17 00:01:36.335536 kernel: raid6: int64x4 gen() 7306 MB/s
Jan 17 00:01:36.352580 kernel: raid6: int64x2 gen() 6093 MB/s
Jan 17 00:01:36.369572 kernel: raid6: int64x1 gen() 5033 MB/s
Jan 17 00:01:36.369649 kernel: raid6: using algorithm neonx8 gen() 15662 MB/s
Jan 17 00:01:36.386577 kernel: raid6: .... xor() 11564 MB/s, rmw enabled
Jan 17 00:01:36.386666 kernel: raid6: using neon recovery algorithm
Jan 17 00:01:36.391544 kernel: xor: measuring software checksum speed
Jan 17 00:01:36.391593 kernel: 8regs : 19793 MB/sec
Jan 17 00:01:36.391640 kernel: 32regs : 17267 MB/sec
Jan 17 00:01:36.392763 kernel: arm64_neon : 26892 MB/sec
Jan 17 00:01:36.392813 kernel: xor: using function: arm64_neon (26892 MB/sec)
Jan 17 00:01:36.444055 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:01:36.457937 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:01:36.463732 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:01:36.485948 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jan 17 00:01:36.489335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:01:36.498800 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:01:36.512066 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jan 17 00:01:36.549564 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:01:36.557688 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:01:36.609338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:36.615956 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:01:36.632857 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:01:36.634997 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:01:36.636075 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:01:36.636952 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:01:36.641702 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:01:36.669342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:01:36.705863 kernel: ACPI: bus type USB registered
Jan 17 00:01:36.705920 kernel: usbcore: registered new interface driver usbfs
Jan 17 00:01:36.705932 kernel: usbcore: registered new interface driver hub
Jan 17 00:01:36.705942 kernel: scsi host0: Virtio SCSI HBA
Jan 17 00:01:36.705978 kernel: usbcore: registered new device driver usb
Jan 17 00:01:36.724599 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 17 00:01:36.724843 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 17 00:01:36.739605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:01:36.739751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:36.741950 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:36.743029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:01:36.743505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:36.744786 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:36.754136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:36.775557 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:01:36.775797 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 17 00:01:36.775904 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 17 00:01:36.779866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:36.784007 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 17 00:01:36.784179 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 17 00:01:36.784269 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 17 00:01:36.786001 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:36.790567 kernel: hub 1-0:1.0: USB hub found
Jan 17 00:01:36.790775 kernel: hub 1-0:1.0: 4 ports detected
Jan 17 00:01:36.790871 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 17 00:01:36.794603 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 17 00:01:36.794820 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 17 00:01:36.794927 kernel: hub 2-0:1.0: USB hub found
Jan 17 00:01:36.796556 kernel: hub 2-0:1.0: 4 ports detected
Jan 17 00:01:36.796765 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:01:36.799529 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 17 00:01:36.807036 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 17 00:01:36.807248 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 17 00:01:36.807755 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 17 00:01:36.807907 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 17 00:01:36.809209 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 17 00:01:36.812634 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:01:36.812679 kernel: GPT:17805311 != 80003071
Jan 17 00:01:36.813924 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:01:36.813959 kernel: GPT:17805311 != 80003071
Jan 17 00:01:36.813970 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:01:36.814523 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:01:36.815581 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 17 00:01:36.823540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:36.868544 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (530)
Jan 17 00:01:36.872168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 17 00:01:36.879550 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (524) Jan 17 00:01:36.880411 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 17 00:01:36.888481 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:01:36.893577 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 17 00:01:36.896100 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 17 00:01:36.915036 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:01:36.926557 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:01:36.928624 disk-uuid[575]: Primary Header is updated. Jan 17 00:01:36.928624 disk-uuid[575]: Secondary Entries is updated. Jan 17 00:01:36.928624 disk-uuid[575]: Secondary Header is updated. Jan 17 00:01:37.032800 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 17 00:01:37.169670 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 17 00:01:37.169747 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 17 00:01:37.170031 kernel: usbcore: registered new interface driver usbhid Jan 17 00:01:37.170055 kernel: usbhid: USB HID core driver Jan 17 00:01:37.277588 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 17 00:01:37.407591 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 17 00:01:37.461565 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 17 00:01:37.948646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:01:37.948970 disk-uuid[576]: The operation has completed successfully. Jan 17 00:01:37.999581 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:01:37.999735 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:01:38.009715 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:01:38.029095 sh[594]: Success Jan 17 00:01:38.043859 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:01:38.102231 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:01:38.105738 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:01:38.107697 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:01:38.125706 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:01:38.125775 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:01:38.125799 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:01:38.126778 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:01:38.126822 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:01:38.135586 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 00:01:38.138162 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 17 00:01:38.141395 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:01:38.153831 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:01:38.158877 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:01:38.170062 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:38.170116 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:01:38.170596 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:01:38.177550 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:01:38.177601 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:01:38.193396 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:01:38.196366 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:38.203464 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:01:38.208798 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:01:38.305825 ignition[672]: Ignition 2.19.0 Jan 17 00:01:38.305836 ignition[672]: Stage: fetch-offline Jan 17 00:01:38.305874 ignition[672]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:38.305882 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:38.309635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:01:38.306033 ignition[672]: parsed url from cmdline: "" Jan 17 00:01:38.306036 ignition[672]: no config URL provided Jan 17 00:01:38.306041 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:01:38.306050 ignition[672]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:01:38.306055 ignition[672]: failed to fetch config: resource requires networking Jan 17 00:01:38.306249 ignition[672]: Ignition finished successfully Jan 17 00:01:38.331074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:01:38.338786 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:01:38.365771 systemd-networkd[781]: lo: Link UP Jan 17 00:01:38.366337 systemd-networkd[781]: lo: Gained carrier Jan 17 00:01:38.368621 systemd-networkd[781]: Enumeration completed Jan 17 00:01:38.368801 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:01:38.369493 systemd[1]: Reached target network.target - Network. Jan 17 00:01:38.370990 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:38.370993 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:01:38.371928 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:38.371932 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:01:38.372583 systemd-networkd[781]: eth0: Link UP Jan 17 00:01:38.372586 systemd-networkd[781]: eth0: Gained carrier Jan 17 00:01:38.372594 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:01:38.377797 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:01:38.378794 systemd-networkd[781]: eth1: Link UP Jan 17 00:01:38.378797 systemd-networkd[781]: eth1: Gained carrier Jan 17 00:01:38.378805 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:38.393503 ignition[783]: Ignition 2.19.0 Jan 17 00:01:38.393536 ignition[783]: Stage: fetch Jan 17 00:01:38.393768 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:38.393782 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:38.393884 ignition[783]: parsed url from cmdline: "" Jan 17 00:01:38.393887 ignition[783]: no config URL provided Jan 17 00:01:38.393892 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:01:38.393900 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:01:38.393923 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 17 00:01:38.396097 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 17 00:01:38.418629 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:01:38.427619 systemd-networkd[781]: eth0: DHCPv4 address 167.235.246.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:01:38.596370 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 17 00:01:38.602458 ignition[783]: GET result: OK Jan 17 00:01:38.602670 ignition[783]: parsing config with SHA512: 060422e0a2f3af00d2f39ecfa71bf9eb75a7783434bd5a72264ee1bae5ee173dfe2f19fdd630e3b3d1b9d8f6d65cecbcb28b8c9897227f9f67991df5cdfb7312 Jan 17 00:01:38.608630 unknown[783]: fetched base config from "system" Jan 17 00:01:38.609028 ignition[783]: fetch: fetch complete Jan 17 00:01:38.608640 unknown[783]: fetched base config from "system" Jan 17 00:01:38.609033 ignition[783]: fetch: fetch passed Jan 17 00:01:38.608645 unknown[783]: fetched user config from "hetzner" Jan 17 00:01:38.609081 ignition[783]: Ignition finished successfully Jan 17 00:01:38.611340 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:01:38.618777 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:01:38.630840 ignition[790]: Ignition 2.19.0 Jan 17 00:01:38.630849 ignition[790]: Stage: kargs Jan 17 00:01:38.631030 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:38.631040 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:38.634535 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:01:38.632026 ignition[790]: kargs: kargs passed Jan 17 00:01:38.632086 ignition[790]: Ignition finished successfully Jan 17 00:01:38.640705 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:01:38.652445 ignition[796]: Ignition 2.19.0 Jan 17 00:01:38.652461 ignition[796]: Stage: disks Jan 17 00:01:38.652779 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:38.652792 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:38.654949 ignition[796]: disks: disks passed Jan 17 00:01:38.657134 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 17 00:01:38.655037 ignition[796]: Ignition finished successfully Jan 17 00:01:38.658302 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:01:38.660771 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:01:38.661441 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:01:38.662100 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:01:38.663721 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:01:38.670796 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:01:38.688308 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 00:01:38.695594 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:01:38.702854 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:01:38.755780 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:01:38.756773 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:01:38.758328 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:01:38.765672 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:01:38.769799 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:01:38.774062 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:01:38.780660 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:01:38.784892 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813) Jan 17 00:01:38.780698 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:01:38.787967 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:38.788016 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:01:38.788029 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:01:38.792305 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:01:38.792351 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:01:38.800130 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:01:38.801427 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:01:38.812797 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:01:38.839290 coreos-metadata[815]: Jan 17 00:01:38.839 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 17 00:01:38.842808 coreos-metadata[815]: Jan 17 00:01:38.842 INFO Fetch successful Jan 17 00:01:38.842808 coreos-metadata[815]: Jan 17 00:01:38.842 INFO wrote hostname ci-4081-3-6-n-089d3b6582 to /sysroot/etc/hostname Jan 17 00:01:38.843813 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 17 00:01:38.858418 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:01:38.864496 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:01:38.869174 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:01:38.874135 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:01:38.976894 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:01:38.980811 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:01:38.983766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:01:38.994569 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:39.012104 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:01:39.023556 ignition[931]: INFO : Ignition 2.19.0 Jan 17 00:01:39.023556 ignition[931]: INFO : Stage: mount Jan 17 00:01:39.023556 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:39.023556 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:39.028000 ignition[931]: INFO : mount: mount passed Jan 17 00:01:39.028000 ignition[931]: INFO : Ignition finished successfully Jan 17 00:01:39.026567 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:01:39.034738 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:01:39.126989 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:01:39.134216 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:01:39.144028 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Jan 17 00:01:39.144090 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:01:39.144114 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:01:39.145527 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:01:39.147999 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 00:01:39.148084 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:01:39.150825 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:01:39.175546 ignition[959]: INFO : Ignition 2.19.0 Jan 17 00:01:39.175546 ignition[959]: INFO : Stage: files Jan 17 00:01:39.175546 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:39.175546 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:39.178133 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:01:39.181022 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:01:39.181022 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:01:39.184294 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:01:39.186162 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:01:39.187224 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:01:39.186710 unknown[959]: wrote ssh authorized keys file for user: core Jan 17 00:01:39.189523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:01:39.189523 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 17 00:01:39.269138 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:01:39.343550 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:01:39.343550 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:01:39.347020 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 17 00:01:39.437023 systemd-networkd[781]: eth1: Gained IPv6LL Jan 17 00:01:39.665054 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:01:40.171307 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:01:40.171307 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:01:40.173908 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:01:40.175240 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:01:40.175240 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:01:40.175240 ignition[959]: INFO : files: files passed Jan 17 00:01:40.175240 ignition[959]: INFO : Ignition finished successfully Jan 17 00:01:40.177317 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:01:40.184856 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:01:40.191815 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:01:40.192722 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:01:40.192816 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 17 00:01:40.204318 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:01:40.204318 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:01:40.206984 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:01:40.209311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:01:40.213287 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:01:40.218781 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:01:40.257885 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:01:40.258081 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:01:40.259947 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:01:40.261196 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:01:40.262538 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:01:40.266716 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:01:40.269018 systemd-networkd[781]: eth0: Gained IPv6LL Jan 17 00:01:40.282172 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:01:40.288913 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:01:40.305498 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:01:40.306264 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:01:40.308105 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:01:40.309726 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:01:40.309855 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:01:40.311573 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:01:40.312263 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:01:40.313336 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:01:40.314435 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:01:40.315525 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:01:40.316737 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:01:40.317940 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:01:40.319374 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:01:40.320493 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:01:40.321725 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:01:40.322713 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:01:40.322839 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:01:40.324234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:01:40.325063 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:01:40.326252 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 00:01:40.326333 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:01:40.327419 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:01:40.327658 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:01:40.329228 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:01:40.329345 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:01:40.330670 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:01:40.330770 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:01:40.332040 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:01:40.332134 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:01:40.339801 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:01:40.344337 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:01:40.346699 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:01:40.346858 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:01:40.348275 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:01:40.348373 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:01:40.355867 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:01:40.357985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:01:40.365149 ignition[1011]: INFO : Ignition 2.19.0 Jan 17 00:01:40.365149 ignition[1011]: INFO : Stage: umount Jan 17 00:01:40.366500 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:01:40.366500 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 17 00:01:40.369894 ignition[1011]: INFO : umount: umount passed Jan 17 00:01:40.369894 ignition[1011]: INFO : Ignition finished successfully Jan 17 00:01:40.368024 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:01:40.370782 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:01:40.370889 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:01:40.371993 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:01:40.372088 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:01:40.373934 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:01:40.373989 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:01:40.374869 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:01:40.374912 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:01:40.375844 systemd[1]: Stopped target network.target - Network. Jan 17 00:01:40.376741 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:01:40.376806 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:01:40.377811 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:01:40.378685 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:01:40.382570 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:01:40.383992 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 17 00:01:40.385864 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:01:40.386755 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:01:40.386800 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:01:40.387637 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:01:40.387683 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:01:40.388477 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:01:40.388560 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:01:40.389476 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:01:40.389539 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:01:40.390661 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:01:40.392479 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:01:40.394006 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:01:40.394105 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:01:40.395353 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:01:40.395456 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:01:40.396580 systemd-networkd[781]: eth1: DHCPv6 lease lost Jan 17 00:01:40.399959 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 17 00:01:40.402061 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:01:40.402196 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:01:40.405004 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:01:40.405735 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:01:40.407119 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:01:40.407173 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:01:40.422739 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:01:40.423852 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:01:40.423957 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:01:40.426807 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:01:40.426862 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:01:40.427479 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:01:40.427587 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:01:40.429454 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:01:40.429501 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:01:40.431858 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:01:40.451945 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:01:40.452081 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:01:40.455112 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:01:40.455302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:01:40.457031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:01:40.457081 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 17 00:01:40.458364 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:01:40.458405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:01:40.459482 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:01:40.459621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:01:40.461673 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:01:40.461729 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:01:40.463225 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:01:40.463274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:01:40.469775 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:01:40.470369 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:01:40.470432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:01:40.472799 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 00:01:40.472844 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:01:40.474527 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:01:40.474573 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:01:40.475263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:01:40.475306 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:01:40.483478 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:01:40.483682 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:01:40.484697 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:01:40.494938 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:01:40.504342 systemd[1]: Switching root. Jan 17 00:01:40.537553 systemd-journald[237]: Journal stopped Jan 17 00:01:41.428678 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 17 00:01:41.428746 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:01:41.428762 kernel: SELinux: policy capability open_perms=1 Jan 17 00:01:41.428772 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:01:41.428787 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:01:41.428796 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:01:41.428806 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:01:41.428815 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:01:41.428824 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:01:41.428834 kernel: audit: type=1403 audit(1768608100.688:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:01:41.428845 systemd[1]: Successfully loaded SELinux policy in 37.250ms. Jan 17 00:01:41.428869 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.970ms. 
Jan 17 00:01:41.428880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:01:41.428893 systemd[1]: Detected virtualization kvm. Jan 17 00:01:41.428904 systemd[1]: Detected architecture arm64. Jan 17 00:01:41.428915 systemd[1]: Detected first boot. Jan 17 00:01:41.428925 systemd[1]: Hostname set to <ci-4081-3-6-n-089d3b6582>. Jan 17 00:01:41.428935 systemd[1]: Initializing machine ID from VM UUID. Jan 17 00:01:41.428946 zram_generator::config[1055]: No configuration found. Jan 17 00:01:41.428958 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:01:41.428969 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:01:41.428979 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:01:41.428990 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:01:41.429002 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:01:41.429020 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:01:41.429031 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:01:41.429041 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:01:41.429054 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:01:41.429065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:01:41.429076 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:01:41.429086 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:01:41.429096 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:01:41.429107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:01:41.429117 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:01:41.429129 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:01:41.429140 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:01:41.429152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:01:41.429162 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 00:01:41.429172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:01:41.429183 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:01:41.429194 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:01:41.429204 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:01:41.429216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:01:41.429226 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:01:41.429237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:01:41.429247 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:01:41.429258 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:01:41.429269 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:01:41.429279 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:01:41.429289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:01:41.429300 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:01:41.429312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:01:41.429322 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:01:41.429333 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:01:41.429343 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:01:41.429355 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:01:41.429366 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:01:41.429376 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:01:41.429386 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:01:41.429397 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:01:41.429409 systemd[1]: Reached target machines.target - Containers. Jan 17 00:01:41.429419 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:01:41.429430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:41.429440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:01:41.429453 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:01:41.429465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:01:41.429477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:01:41.429488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:01:41.429498 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:01:41.429518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:01:41.429531 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:01:41.429542 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:01:41.429553 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:01:41.429563 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:01:41.429575 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:01:41.429586 kernel: fuse: init (API version 7.39) Jan 17 00:01:41.429596 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:01:41.429614 kernel: loop: module loaded Jan 17 00:01:41.429626 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:01:41.429637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:01:41.429647 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jan 17 00:01:41.429658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:01:41.429668 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:01:41.429681 systemd[1]: Stopped verity-setup.service. Jan 17 00:01:41.429691 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:01:41.429701 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:01:41.429713 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:01:41.429723 kernel: ACPI: bus type drm_connector registered Jan 17 00:01:41.429734 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:01:41.429745 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:01:41.429760 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:01:41.429770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:01:41.429781 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:01:41.429813 systemd-journald[1122]: Collecting audit messages is disabled. Jan 17 00:01:41.429837 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:01:41.429850 systemd-journald[1122]: Journal started Jan 17 00:01:41.429872 systemd-journald[1122]: Runtime Journal (/run/log/journal/1e94bf0dc1b04dc6aeb38ea8cef13071) is 8.0M, max 76.6M, 68.6M free. Jan 17 00:01:41.173694 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:01:41.196135 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:01:41.196852 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:01:41.431970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:01:41.432012 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:01:41.435015 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:01:41.436264 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:01:41.436992 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:01:41.443072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:01:41.443341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:01:41.448875 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:01:41.449044 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:01:41.451175 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:01:41.452642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:01:41.454346 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:01:41.456707 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:01:41.463970 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:01:41.469445 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:01:41.478496 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:01:41.483700 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:01:41.486246 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 17 00:01:41.486281 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:01:41.487949 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:01:41.500796 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:01:41.509807 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:01:41.510808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:41.514116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:01:41.524706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:01:41.526667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:01:41.529731 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:01:41.530621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:01:41.538750 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:01:41.541428 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:01:41.545687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:01:41.547884 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:01:41.549585 systemd-journald[1122]: Time spent on flushing to /var/log/journal/1e94bf0dc1b04dc6aeb38ea8cef13071 is 42.708ms for 1121 entries. Jan 17 00:01:41.549585 systemd-journald[1122]: System Journal (/var/log/journal/1e94bf0dc1b04dc6aeb38ea8cef13071) is 8.0M, max 584.8M, 576.8M free. Jan 17 00:01:41.614958 systemd-journald[1122]: Received client request to flush runtime journal. Jan 17 00:01:41.615022 kernel: loop0: detected capacity change from 0 to 8 Jan 17 00:01:41.553334 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:01:41.556765 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:01:41.558574 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:01:41.579135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:01:41.585784 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:01:41.602494 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:01:41.605811 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:01:41.616354 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:01:41.624048 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:01:41.626592 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:01:41.645127 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:01:41.647596 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:01:41.654162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:01:41.659008 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:01:41.662754 kernel: loop1: detected capacity change from 0 to 200800 Jan 17 00:01:41.670110 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 17 00:01:41.670124 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 17 00:01:41.683429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:01:41.693723 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:01:41.727581 kernel: loop2: detected capacity change from 0 to 114328 Jan 17 00:01:41.752836 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:01:41.762887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:01:41.773569 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 00:01:41.777935 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 17 00:01:41.778278 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 17 00:01:41.782872 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:01:41.812565 kernel: loop4: detected capacity change from 0 to 8 Jan 17 00:01:41.816541 kernel: loop5: detected capacity change from 0 to 200800 Jan 17 00:01:41.838548 kernel: loop6: detected capacity change from 0 to 114328 Jan 17 00:01:41.862554 kernel: loop7: detected capacity change from 0 to 114432 Jan 17 00:01:41.878917 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 17 00:01:41.879375 (sd-merge)[1198]: Merged extensions into '/usr'. Jan 17 00:01:41.887250 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:01:41.887275 systemd[1]: Reloading... Jan 17 00:01:42.005743 zram_generator::config[1224]: No configuration found. Jan 17 00:01:42.085139 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:01:42.103699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:01:42.149753 systemd[1]: Reloading finished in 261 ms. Jan 17 00:01:42.174501 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:01:42.177743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:01:42.186803 systemd[1]: Starting ensure-sysext.service... Jan 17 00:01:42.193753 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:01:42.201816 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:01:42.201835 systemd[1]: Reloading... Jan 17 00:01:42.252394 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:01:42.252695 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:01:42.253322 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:01:42.255630 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. 
Jan 17 00:01:42.255696 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Jan 17 00:01:42.260876 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:01:42.260889 systemd-tmpfiles[1262]: Skipping /boot Jan 17 00:01:42.276142 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:01:42.276157 systemd-tmpfiles[1262]: Skipping /boot Jan 17 00:01:42.293543 zram_generator::config[1291]: No configuration found. Jan 17 00:01:42.403984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:01:42.450082 systemd[1]: Reloading finished in 247 ms. Jan 17 00:01:42.467685 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:01:42.469594 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:01:42.488037 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:01:42.493750 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:01:42.497726 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:01:42.503981 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:01:42.506694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:01:42.509951 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:01:42.517734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:42.519849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:01:42.522778 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:01:42.529820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:01:42.531772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:42.532474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:01:42.533696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:01:42.545503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:42.548024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:01:42.548846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:42.551031 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:01:42.562993 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:01:42.564304 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:01:42.564720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:01:42.570070 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:42.575735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 17 00:01:42.576440 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:42.576854 systemd[1]: Finished ensure-sysext.service. Jan 17 00:01:42.588844 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 00:01:42.590568 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:01:42.596764 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:01:42.603191 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Jan 17 00:01:42.603976 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:01:42.604925 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:01:42.607875 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:01:42.608024 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:01:42.610994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:01:42.619027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:01:42.619219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:01:42.620938 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:01:42.623929 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:01:42.624096 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:01:42.636230 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:01:42.650934 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:01:42.652480 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:01:42.655137 augenrules[1369]: No rules Jan 17 00:01:42.657583 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:01:42.659787 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:01:42.771677 systemd-networkd[1370]: lo: Link UP Jan 17 00:01:42.772001 systemd-networkd[1370]: lo: Gained carrier Jan 17 00:01:42.772630 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 00:01:42.773422 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:01:42.790746 systemd-resolved[1337]: Positive Trust Anchors: Jan 17 00:01:42.790764 systemd-resolved[1337]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:01:42.790796 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:01:42.791254 systemd-networkd[1370]: Enumeration completed Jan 17 00:01:42.791659 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:01:42.800692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:01:42.801639 systemd-resolved[1337]: Using system hostname 'ci-4081-3-6-n-089d3b6582'. Jan 17 00:01:42.802146 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 17 00:01:42.806094 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:01:42.806913 systemd[1]: Reached target network.target - Network. Jan 17 00:01:42.808111 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:01:42.877537 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:01:42.887845 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:42.888669 systemd-networkd[1370]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:01:42.889368 systemd-networkd[1370]: eth1: Link UP Jan 17 00:01:42.889372 systemd-networkd[1370]: eth1: Gained carrier Jan 17 00:01:42.889388 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:42.916088 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 17 00:01:42.916221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:01:42.929017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:01:42.932452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:01:42.936742 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:01:42.937671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:01:42.937717 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:01:42.938041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:01:42.939915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 17 00:01:42.943531 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 17 00:01:42.945532 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 17 00:01:42.945624 kernel: [drm] features: -context_init Jan 17 00:01:42.947546 kernel: [drm] number of scanouts: 1 Jan 17 00:01:42.947587 kernel: [drm] number of cap sets: 0 Jan 17 00:01:42.948808 systemd-networkd[1370]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 17 00:01:42.950090 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jan 17 00:01:42.950771 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:42.951269 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:01:42.952245 systemd-networkd[1370]: eth0: Link UP Jan 17 00:01:42.952251 systemd-networkd[1370]: eth0: Gained carrier Jan 17 00:01:42.952265 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:01:42.952984 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jan 17 00:01:42.955561 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 17 00:01:42.956210 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jan 17 00:01:42.962596 kernel: Console: switching to colour frame buffer device 160x50 Jan 17 00:01:42.967497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:01:42.968133 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:01:42.971709 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 17 00:01:42.975140 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:01:42.975774 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:01:42.978975 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:01:42.979042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:01:43.008661 systemd-networkd[1370]: eth0: DHCPv4 address 167.235.246.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 17 00:01:43.009283 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jan 17 00:01:43.023001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:01:43.043546 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1396) Jan 17 00:01:43.053175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:01:43.053505 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:01:43.057576 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 17 00:01:43.070783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:01:43.074757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:01:43.087863 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 17 00:01:43.140378 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:01:43.175773 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:01:43.183807 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:01:43.198643 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:01:43.228583 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:01:43.229916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:01:43.230996 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:01:43.231996 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:01:43.232844 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:01:43.233836 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:01:43.234706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:01:43.235480 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:01:43.236273 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:01:43.236307 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:01:43.236887 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:01:43.238136 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:01:43.240131 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:01:43.253882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:01:43.256703 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:01:43.258130 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:01:43.259075 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:01:43.259778 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:01:43.260395 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:01:43.260425 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:01:43.262717 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:01:43.266994 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:01:43.282867 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:01:43.290793 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:01:43.298667 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:01:43.308295 jq[1455]: false Jan 17 00:01:43.311868 coreos-metadata[1451]: Jan 17 00:01:43.311 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 17 00:01:43.311342 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:01:43.312085 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 17 00:01:43.315550 coreos-metadata[1451]: Jan 17 00:01:43.313 INFO Fetch successful Jan 17 00:01:43.315550 coreos-metadata[1451]: Jan 17 00:01:43.313 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 17 00:01:43.313836 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:01:43.318540 coreos-metadata[1451]: Jan 17 00:01:43.315 INFO Fetch successful Jan 17 00:01:43.317989 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:01:43.321230 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 17 00:01:43.324268 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:01:43.328739 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:01:43.335848 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:01:43.338409 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:01:43.338925 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:01:43.339914 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:01:43.343195 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:01:43.344745 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:01:43.348935 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:01:43.349133 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:01:43.374045 dbus-daemon[1452]: [system] SELinux support is enabled Jan 17 00:01:43.374250 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:01:43.378741 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:01:43.378780 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:01:43.379540 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:01:43.379560 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:01:43.388245 extend-filesystems[1456]: Found loop4 Jan 17 00:01:43.396104 extend-filesystems[1456]: Found loop5 Jan 17 00:01:43.396104 extend-filesystems[1456]: Found loop6 Jan 17 00:01:43.396104 extend-filesystems[1456]: Found loop7 Jan 17 00:01:43.389187 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda1 Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda2 Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda3 Jan 17 00:01:43.411801 extend-filesystems[1456]: Found usr Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda4 Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda6 Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda7 Jan 17 00:01:43.411801 extend-filesystems[1456]: Found sda9 Jan 17 00:01:43.411801 extend-filesystems[1456]: Checking size of /dev/sda9 Jan 17 00:01:43.389387 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:01:43.390398 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:01:43.439499 jq[1465]: true Jan 17 00:01:43.392835 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:01:43.430856 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:01:43.446288 tar[1473]: linux-arm64/LICENSE Jan 17 00:01:43.446288 tar[1473]: linux-arm64/helm Jan 17 00:01:43.453057 extend-filesystems[1456]: Resized partition /dev/sda9 Jan 17 00:01:43.457813 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Jan 17 00:01:43.469209 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 17 00:01:43.483524 update_engine[1464]: I20260117 00:01:43.480947 1464 main.cc:92] Flatcar Update Engine starting Jan 17 00:01:43.491252 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:01:43.494497 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:01:43.495472 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:01:43.495768 update_engine[1464]: I20260117 00:01:43.495539 1464 update_check_scheduler.cc:74] Next update check in 7m8s Jan 17 00:01:43.498817 jq[1492]: true Jan 17 00:01:43.507758 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:01:43.532561 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1386) Jan 17 00:01:43.585200 systemd-logind[1463]: New seat seat0. Jan 17 00:01:43.594924 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 00:01:43.594953 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 17 00:01:43.595867 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:01:43.644441 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 17 00:01:43.665773 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 00:01:43.665773 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 17 00:01:43.665773 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 17 00:01:43.672696 extend-filesystems[1456]: Resized filesystem in /dev/sda9 Jan 17 00:01:43.672696 extend-filesystems[1456]: Found sr0 Jan 17 00:01:43.666727 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:01:43.678437 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:01:43.666932 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 17 00:01:43.676895 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:01:43.686868 systemd[1]: Starting sshkeys.service... Jan 17 00:01:43.721103 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 00:01:43.732311 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 00:01:43.777353 containerd[1486]: time="2026-01-17T00:01:43.777251080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:01:43.787644 coreos-metadata[1535]: Jan 17 00:01:43.787 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 17 00:01:43.789104 coreos-metadata[1535]: Jan 17 00:01:43.789 INFO Fetch successful Jan 17 00:01:43.792549 unknown[1535]: wrote ssh authorized keys file for user: core Jan 17 00:01:43.821339 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:01:43.826428 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:01:43.828404 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:01:43.836853 systemd[1]: Finished sshkeys.service. Jan 17 00:01:43.838797 containerd[1486]: time="2026-01-17T00:01:43.838754440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:43.840234 containerd[1486]: time="2026-01-17T00:01:43.840195760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:43.840319 containerd[1486]: time="2026-01-17T00:01:43.840305720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:01:43.840396 containerd[1486]: time="2026-01-17T00:01:43.840383160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:01:43.840689 containerd[1486]: time="2026-01-17T00:01:43.840657680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:01:43.840835 containerd[1486]: time="2026-01-17T00:01:43.840817480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841029 containerd[1486]: time="2026-01-17T00:01:43.841007400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841139 containerd[1486]: time="2026-01-17T00:01:43.841123680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841481 containerd[1486]: time="2026-01-17T00:01:43.841457600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841577 containerd[1486]: time="2026-01-17T00:01:43.841563560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841668 containerd[1486]: time="2026-01-17T00:01:43.841652400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841732 containerd[1486]: time="2026-01-17T00:01:43.841719160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:43.841866 containerd[1486]: time="2026-01-17T00:01:43.841850000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:43.842148 containerd[1486]: time="2026-01-17T00:01:43.842127520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:43.842340 containerd[1486]: time="2026-01-17T00:01:43.842321000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:43.842395 containerd[1486]: time="2026-01-17T00:01:43.842383400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:01:43.842589 containerd[1486]: time="2026-01-17T00:01:43.842506080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:01:43.842796 containerd[1486]: time="2026-01-17T00:01:43.842777400Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:01:43.847994 containerd[1486]: time="2026-01-17T00:01:43.847956000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:01:43.848543 containerd[1486]: time="2026-01-17T00:01:43.848145200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:01:43.848543 containerd[1486]: time="2026-01-17T00:01:43.848173280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:01:43.848543 containerd[1486]: time="2026-01-17T00:01:43.848192040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:01:43.848543 containerd[1486]: time="2026-01-17T00:01:43.848206520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:01:43.848543 containerd[1486]: time="2026-01-17T00:01:43.848361080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:01:43.848997 containerd[1486]: time="2026-01-17T00:01:43.848975800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:01:43.849177 containerd[1486]: time="2026-01-17T00:01:43.849158120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:01:43.849249 containerd[1486]: time="2026-01-17T00:01:43.849236520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:01:43.849314 containerd[1486]: time="2026-01-17T00:01:43.849299960Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 00:01:43.849367 containerd[1486]: time="2026-01-17T00:01:43.849354960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.849424 containerd[1486]: time="2026-01-17T00:01:43.849411760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.849477 containerd[1486]: time="2026-01-17T00:01:43.849464520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.849557 containerd[1486]: time="2026-01-17T00:01:43.849543280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849613000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849633800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849650400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849664720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849686120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849699760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849712560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849726760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849739200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849752800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849764520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849778720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849791480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850108 containerd[1486]: time="2026-01-17T00:01:43.849806880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849819040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849831840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849846800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849862920Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849890080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849903280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.849914160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.850035520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:01:43.850375 containerd[1486]: time="2026-01-17T00:01:43.850056360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:01:43.850906 containerd[1486]: time="2026-01-17T00:01:43.850088160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:01:43.851006 containerd[1486]: time="2026-01-17T00:01:43.850986720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:01:43.851061 containerd[1486]: time="2026-01-17T00:01:43.851049240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:01:43.851174 containerd[1486]: time="2026-01-17T00:01:43.851158320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:01:43.851229 containerd[1486]: time="2026-01-17T00:01:43.851217840Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:01:43.851996 containerd[1486]: time="2026-01-17T00:01:43.851290360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:01:43.852048 containerd[1486]: time="2026-01-17T00:01:43.851741800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:01:43.852048 containerd[1486]: time="2026-01-17T00:01:43.851814240Z" level=info msg="Connect containerd service" Jan 17 00:01:43.852048 containerd[1486]: time="2026-01-17T00:01:43.851846880Z" level=info msg="using legacy CRI server" Jan 17 00:01:43.852048 containerd[1486]: time="2026-01-17T00:01:43.851854200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:01:43.852349 containerd[1486]: time="2026-01-17T00:01:43.852328680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:01:43.853301 containerd[1486]: time="2026-01-17T00:01:43.853271800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:01:43.853743 
containerd[1486]: time="2026-01-17T00:01:43.853645040Z" level=info msg="Start subscribing containerd event" Jan 17 00:01:43.853743 containerd[1486]: time="2026-01-17T00:01:43.853718160Z" level=info msg="Start recovering state" Jan 17 00:01:43.853800 containerd[1486]: time="2026-01-17T00:01:43.853785520Z" level=info msg="Start event monitor" Jan 17 00:01:43.853800 containerd[1486]: time="2026-01-17T00:01:43.853796640Z" level=info msg="Start snapshots syncer" Jan 17 00:01:43.853836 containerd[1486]: time="2026-01-17T00:01:43.853807040Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:01:43.853836 containerd[1486]: time="2026-01-17T00:01:43.853814480Z" level=info msg="Start streaming server" Jan 17 00:01:43.854101 containerd[1486]: time="2026-01-17T00:01:43.854074640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:01:43.854196 containerd[1486]: time="2026-01-17T00:01:43.854183160Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:01:43.854298 containerd[1486]: time="2026-01-17T00:01:43.854283800Z" level=info msg="containerd successfully booted in 0.082259s" Jan 17 00:01:43.854376 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:01:44.057572 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:01:44.081463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:01:44.092246 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:01:44.098846 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:01:44.099442 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:01:44.108935 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:01:44.122240 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:01:44.129473 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:01:44.136973 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:01:44.137979 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:01:44.164548 tar[1473]: linux-arm64/README.md Jan 17 00:01:44.172653 systemd-networkd[1370]: eth0: Gained IPv6LL Jan 17 00:01:44.173306 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jan 17 00:01:44.179201 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:01:44.182895 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:01:44.189793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:44.194418 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:01:44.197552 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:01:44.218965 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:01:44.236863 systemd-networkd[1370]: eth1: Gained IPv6LL Jan 17 00:01:44.237577 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection. Jan 17 00:01:44.954777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:44.957160 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 17 00:01:44.957856 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:44.965123 systemd[1]: Startup finished in 794ms (kernel) + 4.996s (initrd) + 4.313s (userspace) = 10.104s. Jan 17 00:01:45.405423 kubelet[1583]: E0117 00:01:45.405348 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:45.410253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:45.410448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:55.642217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:01:55.648804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:55.766900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:55.773873 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:55.820714 kubelet[1601]: E0117 00:01:55.820643 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:55.824505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:55.824861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:02:05.891904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:02:05.902840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:06.034770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:06.035020 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:02:06.079366 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:02:06.089445 systemd[1]: Started sshd@0-167.235.246.183:22-4.153.228.146:48746.service - OpenSSH per-connection server daemon (4.153.228.146:48746). Jan 17 00:02:06.093736 kubelet[1617]: E0117 00:02:06.093688 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:02:06.097765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:02:06.097926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:02:06.700465 sshd[1625]: Accepted publickey for core from 4.153.228.146 port 48746 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:06.701265 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:06.714721 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 17 00:02:06.721957 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:02:06.726430 systemd-logind[1463]: New session 1 of user core. Jan 17 00:02:06.736631 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:02:06.743071 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:02:06.759797 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:02:06.871295 systemd[1630]: Queued start job for default target default.target. Jan 17 00:02:06.884989 systemd[1630]: Created slice app.slice - User Application Slice. Jan 17 00:02:06.885280 systemd[1630]: Reached target paths.target - Paths. Jan 17 00:02:06.885457 systemd[1630]: Reached target timers.target - Timers. Jan 17 00:02:06.887730 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:02:06.902108 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:02:06.902289 systemd[1630]: Reached target sockets.target - Sockets. Jan 17 00:02:06.902319 systemd[1630]: Reached target basic.target - Basic System. Jan 17 00:02:06.902407 systemd[1630]: Reached target default.target - Main User Target. Jan 17 00:02:06.902453 systemd[1630]: Startup finished in 135ms. Jan 17 00:02:06.902884 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:02:06.911180 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:02:07.359946 systemd[1]: Started sshd@1-167.235.246.183:22-4.153.228.146:48748.service - OpenSSH per-connection server daemon (4.153.228.146:48748). Jan 17 00:02:07.981294 sshd[1641]: Accepted publickey for core from 4.153.228.146 port 48748 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:07.983767 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:07.989443 systemd-logind[1463]: New session 2 of user core. Jan 17 00:02:07.999883 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:02:08.424322 sshd[1641]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:08.428692 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:02:08.429418 systemd[1]: sshd@1-167.235.246.183:22-4.153.228.146:48748.service: Deactivated successfully. Jan 17 00:02:08.431378 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:02:08.432463 systemd-logind[1463]: Removed session 2. Jan 17 00:02:08.543228 systemd[1]: Started sshd@2-167.235.246.183:22-4.153.228.146:48764.service - OpenSSH per-connection server daemon (4.153.228.146:48764). Jan 17 00:02:09.165365 sshd[1648]: Accepted publickey for core from 4.153.228.146 port 48764 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:09.167488 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:09.172647 systemd-logind[1463]: New session 3 of user core. Jan 17 00:02:09.181802 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:02:09.605050 sshd[1648]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:09.610956 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:02:09.611799 systemd[1]: sshd@2-167.235.246.183:22-4.153.228.146:48764.service: Deactivated successfully. Jan 17 00:02:09.613891 systemd[1]: session-3.scope: Deactivated successfully. 
Jan 17 00:02:09.617189 systemd-logind[1463]: Removed session 3. Jan 17 00:02:09.715851 systemd[1]: Started sshd@3-167.235.246.183:22-4.153.228.146:48776.service - OpenSSH per-connection server daemon (4.153.228.146:48776). Jan 17 00:02:10.312439 sshd[1655]: Accepted publickey for core from 4.153.228.146 port 48776 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:10.314827 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:10.320946 systemd-logind[1463]: New session 4 of user core. Jan 17 00:02:10.327896 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:02:10.747030 sshd[1655]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:10.754812 systemd[1]: sshd@3-167.235.246.183:22-4.153.228.146:48776.service: Deactivated successfully. Jan 17 00:02:10.756332 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:02:10.761652 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:02:10.765335 systemd-logind[1463]: Removed session 4. Jan 17 00:02:10.867008 systemd[1]: Started sshd@4-167.235.246.183:22-4.153.228.146:48790.service - OpenSSH per-connection server daemon (4.153.228.146:48790). Jan 17 00:02:11.474769 sshd[1662]: Accepted publickey for core from 4.153.228.146 port 48790 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:11.477600 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:11.486137 systemd-logind[1463]: New session 5 of user core. Jan 17 00:02:11.499759 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:02:11.824253 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:02:11.824610 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:11.839052 sudo[1665]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:11.937281 sshd[1662]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:11.944051 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:02:11.944940 systemd[1]: sshd@4-167.235.246.183:22-4.153.228.146:48790.service: Deactivated successfully. Jan 17 00:02:11.946735 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:02:11.947885 systemd-logind[1463]: Removed session 5. Jan 17 00:02:12.048093 systemd[1]: Started sshd@5-167.235.246.183:22-4.153.228.146:48804.service - OpenSSH per-connection server daemon (4.153.228.146:48804). Jan 17 00:02:12.664358 sshd[1670]: Accepted publickey for core from 4.153.228.146 port 48804 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:12.666892 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:12.672535 systemd-logind[1463]: New session 6 of user core. Jan 17 00:02:12.678904 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 17 00:02:13.001360 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:02:13.001861 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:13.006050 sudo[1674]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:13.012003 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:02:13.012361 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:13.036014 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:02:13.039165 auditctl[1677]: No rules Jan 17 00:02:13.040269 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:02:13.040505 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:02:13.042721 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:02:13.073082 augenrules[1695]: No rules Jan 17 00:02:13.075658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:02:13.078765 sudo[1673]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:13.176070 sshd[1670]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:13.181735 systemd[1]: sshd@5-167.235.246.183:22-4.153.228.146:48804.service: Deactivated successfully. Jan 17 00:02:13.185053 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:02:13.187911 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:02:13.189309 systemd-logind[1463]: Removed session 6. Jan 17 00:02:13.293968 systemd[1]: Started sshd@6-167.235.246.183:22-4.153.228.146:48820.service - OpenSSH per-connection server daemon (4.153.228.146:48820). Jan 17 00:02:13.913743 sshd[1703]: Accepted publickey for core from 4.153.228.146 port 48820 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:02:13.915670 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:13.922856 systemd-logind[1463]: New session 7 of user core. Jan 17 00:02:13.934901 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:02:14.256494 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:02:14.256949 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:02:14.561500 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:02:15.015691 systemd-resolved[1337]: Clock change detected. Flushing caches. Jan 17 00:02:15.016145 systemd-timesyncd[1355]: Contacted time server 168.119.211.223:123 (2.flatcar.pool.ntp.org). Jan 17 00:02:15.016203 systemd-timesyncd[1355]: Initial clock synchronization to Sat 2026-01-17 00:02:15.015637 UTC. Jan 17 00:02:15.018047 (dockerd)[1721]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:02:15.264643 dockerd[1721]: time="2026-01-17T00:02:15.264049994Z" level=info msg="Starting up" Jan 17 00:02:15.336905 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport699846898-merged.mount: Deactivated successfully. Jan 17 00:02:15.350415 systemd[1]: var-lib-docker-metacopy\x2dcheck3741682335-merged.mount: Deactivated successfully. 
Jan 17 00:02:15.363207 dockerd[1721]: time="2026-01-17T00:02:15.362901834Z" level=info msg="Loading containers: start." Jan 17 00:02:15.456544 kernel: Initializing XFRM netlink socket Jan 17 00:02:15.533838 systemd-networkd[1370]: docker0: Link UP Jan 17 00:02:15.554974 dockerd[1721]: time="2026-01-17T00:02:15.554871394Z" level=info msg="Loading containers: done." Jan 17 00:02:15.571514 dockerd[1721]: time="2026-01-17T00:02:15.571446714Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:02:15.571705 dockerd[1721]: time="2026-01-17T00:02:15.571614714Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:02:15.571766 dockerd[1721]: time="2026-01-17T00:02:15.571741514Z" level=info msg="Daemon has completed initialization" Jan 17 00:02:15.613762 dockerd[1721]: time="2026-01-17T00:02:15.613631394Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:02:15.615796 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:02:16.334598 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3568192723-merged.mount: Deactivated successfully. Jan 17 00:02:16.588111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:02:16.594842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:16.679500 containerd[1486]: time="2026-01-17T00:02:16.679191114Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:02:16.717625 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:16.730246 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:02:16.783759 kubelet[1867]: E0117 00:02:16.783712 1867 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:02:16.787639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:02:16.787928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:02:17.353948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount262044948.mount: Deactivated successfully. 
Jan 17 00:02:18.423038 containerd[1486]: time="2026-01-17T00:02:18.421813994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:18.423038 containerd[1486]: time="2026-01-17T00:02:18.422991714Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571138" Jan 17 00:02:18.423653 containerd[1486]: time="2026-01-17T00:02:18.423619354Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:18.426413 containerd[1486]: time="2026-01-17T00:02:18.426378874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:18.427814 containerd[1486]: time="2026-01-17T00:02:18.427760154Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.74852608s" Jan 17 00:02:18.427814 containerd[1486]: time="2026-01-17T00:02:18.427804074Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 17 00:02:18.428629 containerd[1486]: time="2026-01-17T00:02:18.428405594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:02:19.607873 containerd[1486]: time="2026-01-17T00:02:19.607831154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:19.609708 containerd[1486]: time="2026-01-17T00:02:19.609619074Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135497" Jan 17 00:02:19.610444 containerd[1486]: time="2026-01-17T00:02:19.610407314Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:19.614348 containerd[1486]: time="2026-01-17T00:02:19.614240314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:19.615965 containerd[1486]: time="2026-01-17T00:02:19.615554234Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.18711348s" Jan 17 00:02:19.615965 containerd[1486]: time="2026-01-17T00:02:19.615621434Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 17 00:02:19.616407 
containerd[1486]: time="2026-01-17T00:02:19.616380114Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:02:20.846541 containerd[1486]: time="2026-01-17T00:02:20.845461674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:20.846942 containerd[1486]: time="2026-01-17T00:02:20.846899714Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191736" Jan 17 00:02:20.847530 containerd[1486]: time="2026-01-17T00:02:20.847423274Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:20.850930 containerd[1486]: time="2026-01-17T00:02:20.850890714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:20.853497 containerd[1486]: time="2026-01-17T00:02:20.853268834Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.23677256s" Jan 17 00:02:20.853497 containerd[1486]: time="2026-01-17T00:02:20.853333274Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 17 00:02:20.854689 containerd[1486]: time="2026-01-17T00:02:20.854501794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:02:21.893503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062075598.mount: Deactivated successfully. 
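Each successful pull above is recorded with three identifiers: the repo tag, the repo digest (manifest sha256), and the image id (config blob sha256). A toy parser for the two reference forms appearing in these lines; real clients use the distribution reference library rather than string splitting:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef handles "repo:tag" and "repo@sha256:<hex>" as seen above.
    // Deliberately naive: no validation, no default-registry logic.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, _ := splitRef("registry.k8s.io/kube-scheduler:v1.34.3")
        fmt.Println(repo, tag) // registry.k8s.io/kube-scheduler v1.34.3
    }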
Jan 17 00:02:22.196473 containerd[1486]: time="2026-01-17T00:02:22.196316434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:22.198130 containerd[1486]: time="2026-01-17T00:02:22.197862194Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805279" Jan 17 00:02:22.201856 containerd[1486]: time="2026-01-17T00:02:22.200314154Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:22.203429 containerd[1486]: time="2026-01-17T00:02:22.203378874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:22.204293 containerd[1486]: time="2026-01-17T00:02:22.204255474Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.34965144s" Jan 17 00:02:22.204403 containerd[1486]: time="2026-01-17T00:02:22.204386074Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 17 00:02:22.205331 containerd[1486]: time="2026-01-17T00:02:22.205285714Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:02:22.803967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159195272.mount: Deactivated successfully. 
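The \x2d sequences in the transient mount units being cleaned up here are systemd's name escaping at work: '-' separates path components in unit names, so a literal dash inside a component is hex-escaped. A minimal escaper covering just the ASCII cases seen in this log; systemd.unit(5) defines the full rules:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeComponent mimics systemd-escape for one path component:
    // '/' becomes '-', and '-', '\' and control bytes become \xNN.
    // Simplified; real systemd escaping also covers leading dots etc.
    func escapeComponent(s string) string {
        var b strings.Builder
        for i := 0; i < len(s); i++ {
            switch c := s[i]; {
            case c == '/':
                b.WriteByte('-')
            case c == '-' || c == '\\' || c < ' ':
                fmt.Fprintf(&b, `\x%02x`, c)
            default:
                b.WriteByte(c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapeComponent("containerd-mount2159195272"))
        // containerd\x2dmount2159195272
    }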
Jan 17 00:02:23.725715 containerd[1486]: time="2026-01-17T00:02:23.724695954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:23.728050 containerd[1486]: time="2026-01-17T00:02:23.727257594Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Jan 17 00:02:23.729389 containerd[1486]: time="2026-01-17T00:02:23.729292874Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:23.735345 containerd[1486]: time="2026-01-17T00:02:23.735276114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:23.738418 containerd[1486]: time="2026-01-17T00:02:23.737118994Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.53179056s" Jan 17 00:02:23.738418 containerd[1486]: time="2026-01-17T00:02:23.737171354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 17 00:02:23.739874 containerd[1486]: time="2026-01-17T00:02:23.739832674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:02:24.315408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596115064.mount: Deactivated successfully. 
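The coredns pull gives enough to estimate effective registry throughput: 20395498 bytes read in the quoted 1.53179056s is roughly 12.7 MiB/s. The arithmetic, since containerd does not log a rate itself (numbers copied from the lines above):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 20395498.0                // "bytes read" above
        dur, _ := time.ParseDuration("1.53179056s") // quoted pull time
        fmt.Printf("%.1f MiB/s\n", bytesRead/dur.Seconds()/(1<<20))
    }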
Jan 17 00:02:24.321455 containerd[1486]: time="2026-01-17T00:02:24.321395194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:24.323393 containerd[1486]: time="2026-01-17T00:02:24.322932674Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Jan 17 00:02:24.323393 containerd[1486]: time="2026-01-17T00:02:24.322971794Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:24.325809 containerd[1486]: time="2026-01-17T00:02:24.325753314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:24.326880 containerd[1486]: time="2026-01-17T00:02:24.326733594Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 586.85796ms" Jan 17 00:02:24.326880 containerd[1486]: time="2026-01-17T00:02:24.326775354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 17 00:02:24.327896 containerd[1486]: time="2026-01-17T00:02:24.327634634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:02:24.963791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount863819809.mount: Deactivated successfully. Jan 17 00:02:26.838198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:02:26.846887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:26.963093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:26.968321 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:02:27.028120 kubelet[2061]: E0117 00:02:27.028017 2061 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:02:27.031613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:02:27.031777 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
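The failed kubelet starts land almost exactly 10 seconds apart (counter 3 at 00:02:16.588, counter 4 at 00:02:26.838), which is consistent with Restart=on-failure and a RestartSec near 10s; the unit settings are an inference, only the timestamps below are taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "Jan 2 15:04:05.000000"
        a, _ := time.Parse(layout, "Jan 17 00:02:16.588111") // counter 3
        b, _ := time.Parse(layout, "Jan 17 00:02:26.838198") // counter 4
        fmt.Println(b.Sub(a).Round(time.Millisecond)) // 10.25s
    }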
Jan 17 00:02:28.189023 containerd[1486]: time="2026-01-17T00:02:28.188953674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:28.191133 containerd[1486]: time="2026-01-17T00:02:28.191074394Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98063043" Jan 17 00:02:28.192088 containerd[1486]: time="2026-01-17T00:02:28.191533914Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:28.195302 containerd[1486]: time="2026-01-17T00:02:28.195258194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:28.196806 containerd[1486]: time="2026-01-17T00:02:28.196757634Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.8690782s" Jan 17 00:02:28.196806 containerd[1486]: time="2026-01-17T00:02:28.196804154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 17 00:02:29.321209 update_engine[1464]: I20260117 00:02:29.321117 1464 update_attempter.cc:509] Updating boot flags... Jan 17 00:02:29.373520 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2104) Jan 17 00:02:34.325308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:34.333924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:34.369470 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-7.scope)... Jan 17 00:02:34.369486 systemd[1]: Reloading... Jan 17 00:02:34.510726 zram_generator::config[2166]: No configuration found. Jan 17 00:02:34.601474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:02:34.672422 systemd[1]: Reloading finished in 302 ms. Jan 17 00:02:34.751032 (kubelet)[2197]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:02:34.755036 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:34.756023 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:02:34.756401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:34.759232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:34.888799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:34.895849 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:02:34.942990 kubelet[2208]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 17 00:02:34.942990 kubelet[2208]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:34.943392 kubelet[2208]: I0117 00:02:34.943020 2208 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:02:35.681558 kubelet[2208]: I0117 00:02:35.681065 2208 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:02:35.681558 kubelet[2208]: I0117 00:02:35.681103 2208 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:02:35.681558 kubelet[2208]: I0117 00:02:35.681136 2208 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:02:35.681558 kubelet[2208]: I0117 00:02:35.681142 2208 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:02:35.681558 kubelet[2208]: I0117 00:02:35.681391 2208 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:02:35.690088 kubelet[2208]: E0117 00:02:35.690032 2208 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://167.235.246.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:02:35.691012 kubelet[2208]: I0117 00:02:35.690944 2208 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:02:35.695885 kubelet[2208]: E0117 00:02:35.695841 2208 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:02:35.696011 kubelet[2208]: I0117 00:02:35.695913 2208 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:02:35.698713 kubelet[2208]: I0117 00:02:35.698647 2208 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:02:35.699033 kubelet[2208]: I0117 00:02:35.698978 2208 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:02:35.700141 kubelet[2208]: I0117 00:02:35.699015 2208 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-089d3b6582","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:02:35.700297 kubelet[2208]: I0117 00:02:35.700214 2208 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:02:35.700297 kubelet[2208]: I0117 00:02:35.700227 2208 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:02:35.700735 kubelet[2208]: I0117 00:02:35.700700 2208 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:02:35.705184 kubelet[2208]: I0117 00:02:35.705131 2208 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:35.706488 kubelet[2208]: I0117 00:02:35.706447 2208 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:02:35.706488 kubelet[2208]: I0117 00:02:35.706478 2208 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:02:35.707131 kubelet[2208]: I0117 00:02:35.707096 2208 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:02:35.707131 kubelet[2208]: I0117 00:02:35.707126 2208 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:02:35.707411 kubelet[2208]: E0117 00:02:35.707378 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://167.235.246.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-089d3b6582&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:02:35.711595 kubelet[2208]: E0117 00:02:35.709888 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://167.235.246.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:02:35.712587 kubelet[2208]: I0117 00:02:35.712135 2208 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:02:35.713089 kubelet[2208]: I0117 00:02:35.713065 2208 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:02:35.713196 kubelet[2208]: I0117 00:02:35.713184 2208 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:02:35.713322 kubelet[2208]: W0117 00:02:35.713307 2208 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:02:35.718979 kubelet[2208]: I0117 00:02:35.718954 2208 server.go:1262] "Started kubelet" Jan 17 00:02:35.721882 kubelet[2208]: I0117 00:02:35.721849 2208 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:02:35.724979 kubelet[2208]: E0117 00:02:35.723458 2208 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://167.235.246.183:6443/api/v1/namespaces/default/events\": dial tcp 167.235.246.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-089d3b6582.188b5bbf447c2852 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-089d3b6582,UID:ci-4081-3-6-n-089d3b6582,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-089d3b6582,},FirstTimestamp:2026-01-17 00:02:35.718920274 +0000 UTC m=+0.818649281,LastTimestamp:2026-01-17 00:02:35.718920274 +0000 UTC m=+0.818649281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-089d3b6582,}" Jan 17 00:02:35.725737 kubelet[2208]: I0117 00:02:35.725691 2208 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:02:35.727043 kubelet[2208]: I0117 00:02:35.726994 2208 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:02:35.730954 kubelet[2208]: I0117 00:02:35.730884 2208 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:02:35.731054 kubelet[2208]: I0117 00:02:35.730969 2208 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:02:35.731176 kubelet[2208]: I0117 00:02:35.731157 2208 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:02:35.731458 kubelet[2208]: I0117 00:02:35.731437 2208 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:02:35.734894 kubelet[2208]: I0117 00:02:35.734150 2208 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:02:35.734894 kubelet[2208]: E0117 00:02:35.734421 2208 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-089d3b6582\" not found" Jan 17 
00:02:35.735038 kubelet[2208]: E0117 00:02:35.734990 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-089d3b6582?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="200ms" Jan 17 00:02:35.737175 kubelet[2208]: E0117 00:02:35.737138 2208 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:02:35.737422 kubelet[2208]: I0117 00:02:35.737392 2208 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:02:35.737422 kubelet[2208]: I0117 00:02:35.737417 2208 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:02:35.737626 kubelet[2208]: I0117 00:02:35.737595 2208 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:02:35.738312 kubelet[2208]: I0117 00:02:35.738291 2208 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:02:35.738459 kubelet[2208]: I0117 00:02:35.738448 2208 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:02:35.747118 kubelet[2208]: I0117 00:02:35.747050 2208 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:02:35.748347 kubelet[2208]: I0117 00:02:35.748292 2208 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:02:35.748347 kubelet[2208]: I0117 00:02:35.748326 2208 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:02:35.748347 kubelet[2208]: I0117 00:02:35.748357 2208 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:02:35.748498 kubelet[2208]: E0117 00:02:35.748404 2208 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:02:35.755960 kubelet[2208]: E0117 00:02:35.755918 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://167.235.246.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:02:35.756243 kubelet[2208]: E0117 00:02:35.756216 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://167.235.246.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:02:35.772592 kubelet[2208]: I0117 00:02:35.772562 2208 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:02:35.772592 kubelet[2208]: I0117 00:02:35.772586 2208 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:02:35.772736 kubelet[2208]: I0117 00:02:35.772610 2208 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:35.776469 kubelet[2208]: I0117 00:02:35.776317 2208 policy_none.go:49] "None policy: Start" Jan 17 00:02:35.776469 kubelet[2208]: I0117 00:02:35.776370 2208 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:02:35.776469 kubelet[2208]: I0117 
00:02:35.776393 2208 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:02:35.778391 kubelet[2208]: I0117 00:02:35.778331 2208 policy_none.go:47] "Start" Jan 17 00:02:35.784212 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:02:35.802048 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:02:35.807649 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:02:35.820660 kubelet[2208]: E0117 00:02:35.820308 2208 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:02:35.823114 kubelet[2208]: I0117 00:02:35.821529 2208 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:02:35.823114 kubelet[2208]: I0117 00:02:35.821804 2208 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:02:35.823114 kubelet[2208]: I0117 00:02:35.822365 2208 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:02:35.824551 kubelet[2208]: E0117 00:02:35.824450 2208 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:02:35.824551 kubelet[2208]: E0117 00:02:35.824527 2208 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-089d3b6582\" not found" Jan 17 00:02:35.865369 systemd[1]: Created slice kubepods-burstable-pod014ed805460c88bb2dac956a887edff1.slice - libcontainer container kubepods-burstable-pod014ed805460c88bb2dac956a887edff1.slice. Jan 17 00:02:35.881398 kubelet[2208]: E0117 00:02:35.881316 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.885733 systemd[1]: Created slice kubepods-burstable-podb857e3e114f9365d0857edd8c754623f.slice - libcontainer container kubepods-burstable-podb857e3e114f9365d0857edd8c754623f.slice. Jan 17 00:02:35.902496 kubelet[2208]: E0117 00:02:35.902167 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.906932 systemd[1]: Created slice kubepods-burstable-pod0740d2521bb62474643d607c17501686.slice - libcontainer container kubepods-burstable-pod0740d2521bb62474643d607c17501686.slice. 
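The slices created here follow kubelet's systemd cgroup naming: QoS parents under kubepods.slice, then one slice per pod embedding its UID. A sketch of just the name construction, simplified from kubelet's escaping (dashes in a UID become underscores; these static-pod UIDs happen to contain none):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice reproduces names like the one in the log:
    // kubepods-burstable-pod014ed805460c88bb2dac956a887edff1.slice
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("burstable", "014ed805460c88bb2dac956a887edff1"))
    }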
Jan 17 00:02:35.910257 kubelet[2208]: E0117 00:02:35.910223 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.924409 kubelet[2208]: I0117 00:02:35.924177 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.925261 kubelet[2208]: E0117 00:02:35.925103 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://167.235.246.183:6443/api/v1/nodes\": dial tcp 167.235.246.183:6443: connect: connection refused" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.936036 kubelet[2208]: E0117 00:02:35.935879 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-089d3b6582?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="400ms" Jan 17 00:02:35.939453 kubelet[2208]: I0117 00:02:35.939106 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014ed805460c88bb2dac956a887edff1-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" (UID: \"014ed805460c88bb2dac956a887edff1\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.939453 kubelet[2208]: I0117 00:02:35.939162 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.939453 kubelet[2208]: I0117 00:02:35.939202 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.939453 kubelet[2208]: I0117 00:02:35.939236 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.939453 kubelet[2208]: I0117 00:02:35.939266 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014ed805460c88bb2dac956a887edff1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" (UID: \"014ed805460c88bb2dac956a887edff1\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.940055 kubelet[2208]: I0117 00:02:35.939296 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014ed805460c88bb2dac956a887edff1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" (UID: 
\"014ed805460c88bb2dac956a887edff1\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.940055 kubelet[2208]: I0117 00:02:35.939330 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.940055 kubelet[2208]: I0117 00:02:35.939373 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:35.940055 kubelet[2208]: I0117 00:02:35.939408 2208 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0740d2521bb62474643d607c17501686-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-089d3b6582\" (UID: \"0740d2521bb62474643d607c17501686\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:36.128364 kubelet[2208]: I0117 00:02:36.128209 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:36.128913 kubelet[2208]: E0117 00:02:36.128680 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://167.235.246.183:6443/api/v1/nodes\": dial tcp 167.235.246.183:6443: connect: connection refused" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:36.186396 containerd[1486]: time="2026-01-17T00:02:36.186245194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-089d3b6582,Uid:014ed805460c88bb2dac956a887edff1,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:36.207695 containerd[1486]: time="2026-01-17T00:02:36.207084034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-089d3b6582,Uid:b857e3e114f9365d0857edd8c754623f,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:36.215970 containerd[1486]: time="2026-01-17T00:02:36.215092554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-089d3b6582,Uid:0740d2521bb62474643d607c17501686,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:36.337426 kubelet[2208]: E0117 00:02:36.337347 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-089d3b6582?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="800ms" Jan 17 00:02:36.532100 kubelet[2208]: I0117 00:02:36.531887 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:36.532786 kubelet[2208]: E0117 00:02:36.532294 2208 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://167.235.246.183:6443/api/v1/nodes\": dial tcp 167.235.246.183:6443: connect: connection refused" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:36.667081 kubelet[2208]: E0117 00:02:36.667015 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://167.235.246.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:02:36.756416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113775389.mount: Deactivated successfully. Jan 17 00:02:36.765084 containerd[1486]: time="2026-01-17T00:02:36.765029074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:36.766740 containerd[1486]: time="2026-01-17T00:02:36.766116834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 17 00:02:36.767414 containerd[1486]: time="2026-01-17T00:02:36.767376394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:36.768704 containerd[1486]: time="2026-01-17T00:02:36.768664274Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:36.771553 containerd[1486]: time="2026-01-17T00:02:36.770326714Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:36.771858 containerd[1486]: time="2026-01-17T00:02:36.771831634Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:36.772501 containerd[1486]: time="2026-01-17T00:02:36.772454394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:36.775730 containerd[1486]: time="2026-01-17T00:02:36.775678634Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:36.777021 containerd[1486]: time="2026-01-17T00:02:36.776979514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.79516ms" Jan 17 00:02:36.779076 containerd[1486]: time="2026-01-17T00:02:36.779036874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.68836ms" Jan 17 00:02:36.781488 containerd[1486]: time="2026-01-17T00:02:36.781452874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.2574ms" Jan 17 00:02:36.889595 kubelet[2208]: E0117 00:02:36.889470 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://167.235.246.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:02:36.904723 containerd[1486]: time="2026-01-17T00:02:36.904629514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:36.905137 containerd[1486]: time="2026-01-17T00:02:36.905005194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:36.905266 containerd[1486]: time="2026-01-17T00:02:36.905241154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:36.906117 containerd[1486]: time="2026-01-17T00:02:36.906059074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:36.911684 containerd[1486]: time="2026-01-17T00:02:36.911517874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:36.911827 containerd[1486]: time="2026-01-17T00:02:36.911648634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:36.911827 containerd[1486]: time="2026-01-17T00:02:36.911682594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:36.911827 containerd[1486]: time="2026-01-17T00:02:36.911794834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:36.916440 containerd[1486]: time="2026-01-17T00:02:36.914725034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:36.916440 containerd[1486]: time="2026-01-17T00:02:36.916176314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:36.916440 containerd[1486]: time="2026-01-17T00:02:36.916188194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:36.916440 containerd[1486]: time="2026-01-17T00:02:36.916274234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:36.932758 systemd[1]: Started cri-containerd-6aa7353d6e5f5f6ca5b20fa1cd5f7530b57aa9bfd0aff0eb28802736ddadee3e.scope - libcontainer container 6aa7353d6e5f5f6ca5b20fa1cd5f7530b57aa9bfd0aff0eb28802736ddadee3e. 
Jan 17 00:02:36.940731 kubelet[2208]: E0117 00:02:36.940432 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://167.235.246.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-089d3b6582&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:02:36.946697 systemd[1]: Started cri-containerd-0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9.scope - libcontainer container 0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9. Jan 17 00:02:36.953033 systemd[1]: Started cri-containerd-f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e.scope - libcontainer container f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e. Jan 17 00:02:36.989105 containerd[1486]: time="2026-01-17T00:02:36.988980874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-089d3b6582,Uid:014ed805460c88bb2dac956a887edff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aa7353d6e5f5f6ca5b20fa1cd5f7530b57aa9bfd0aff0eb28802736ddadee3e\"" Jan 17 00:02:37.001064 containerd[1486]: time="2026-01-17T00:02:37.001013154Z" level=info msg="CreateContainer within sandbox \"6aa7353d6e5f5f6ca5b20fa1cd5f7530b57aa9bfd0aff0eb28802736ddadee3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:02:37.024959 containerd[1486]: time="2026-01-17T00:02:37.024614714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-089d3b6582,Uid:b857e3e114f9365d0857edd8c754623f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e\"" Jan 17 00:02:37.030297 containerd[1486]: time="2026-01-17T00:02:37.030165074Z" level=info msg="CreateContainer within sandbox \"f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:02:37.037292 containerd[1486]: time="2026-01-17T00:02:37.036948274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-089d3b6582,Uid:0740d2521bb62474643d607c17501686,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9\"" Jan 17 00:02:37.037987 containerd[1486]: time="2026-01-17T00:02:37.037466674Z" level=info msg="CreateContainer within sandbox \"6aa7353d6e5f5f6ca5b20fa1cd5f7530b57aa9bfd0aff0eb28802736ddadee3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bbf545df95204a72f5de92e94d78345825b5e4bccde128bc82e03fdb474649f\"" Jan 17 00:02:37.039393 containerd[1486]: time="2026-01-17T00:02:37.039268554Z" level=info msg="StartContainer for \"8bbf545df95204a72f5de92e94d78345825b5e4bccde128bc82e03fdb474649f\"" Jan 17 00:02:37.043606 containerd[1486]: time="2026-01-17T00:02:37.043470394Z" level=info msg="CreateContainer within sandbox \"0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:02:37.050388 containerd[1486]: time="2026-01-17T00:02:37.050341434Z" level=info msg="CreateContainer within sandbox \"f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019\"" Jan 
17 00:02:37.051608 containerd[1486]: time="2026-01-17T00:02:37.051578914Z" level=info msg="StartContainer for \"8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019\"" Jan 17 00:02:37.061767 containerd[1486]: time="2026-01-17T00:02:37.061634114Z" level=info msg="CreateContainer within sandbox \"0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc\"" Jan 17 00:02:37.062809 containerd[1486]: time="2026-01-17T00:02:37.062778434Z" level=info msg="StartContainer for \"bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc\"" Jan 17 00:02:37.076420 systemd[1]: Started cri-containerd-8bbf545df95204a72f5de92e94d78345825b5e4bccde128bc82e03fdb474649f.scope - libcontainer container 8bbf545df95204a72f5de92e94d78345825b5e4bccde128bc82e03fdb474649f. Jan 17 00:02:37.104736 systemd[1]: Started cri-containerd-8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019.scope - libcontainer container 8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019. Jan 17 00:02:37.113853 systemd[1]: Started cri-containerd-bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc.scope - libcontainer container bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc. Jan 17 00:02:37.138885 kubelet[2208]: E0117 00:02:37.138840 2208 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-089d3b6582?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="1.6s" Jan 17 00:02:37.142977 containerd[1486]: time="2026-01-17T00:02:37.141694114Z" level=info msg="StartContainer for \"8bbf545df95204a72f5de92e94d78345825b5e4bccde128bc82e03fdb474649f\" returns successfully" Jan 17 00:02:37.174566 containerd[1486]: time="2026-01-17T00:02:37.173789794Z" level=info msg="StartContainer for \"8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019\" returns successfully" Jan 17 00:02:37.179094 containerd[1486]: time="2026-01-17T00:02:37.179041634Z" level=info msg="StartContainer for \"bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc\" returns successfully" Jan 17 00:02:37.264226 kubelet[2208]: E0117 00:02:37.264170 2208 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://167.235.246.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:02:37.335137 kubelet[2208]: I0117 00:02:37.334159 2208 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:37.778152 kubelet[2208]: E0117 00:02:37.777764 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:37.780028 kubelet[2208]: E0117 00:02:37.779992 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:37.784816 kubelet[2208]: E0117 00:02:37.784785 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:38.788324 kubelet[2208]: E0117 00:02:38.787801 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:38.788324 kubelet[2208]: E0117 00:02:38.788190 2208 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:39.995858 kubelet[2208]: E0117 00:02:39.995815 2208 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-089d3b6582\" not found" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.060541 kubelet[2208]: E0117 00:02:40.059801 2208 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-6-n-089d3b6582.188b5bbf447c2852 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-089d3b6582,UID:ci-4081-3-6-n-089d3b6582,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-089d3b6582,},FirstTimestamp:2026-01-17 00:02:35.718920274 +0000 UTC m=+0.818649281,LastTimestamp:2026-01-17 00:02:35.718920274 +0000 UTC m=+0.818649281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-089d3b6582,}" Jan 17 00:02:40.110020 kubelet[2208]: I0117 00:02:40.109394 2208 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.135720 kubelet[2208]: I0117 00:02:40.135661 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.159133 kubelet[2208]: E0117 00:02:40.159067 2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.159133 kubelet[2208]: I0117 00:02:40.159111 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.167984 kubelet[2208]: E0117 00:02:40.167930 2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.167984 kubelet[2208]: I0117 00:02:40.167969 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.172607 kubelet[2208]: E0117 00:02:40.172213 2208 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-089d3b6582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-089d3b6582" Jan 17 00:02:40.712234 kubelet[2208]: I0117 00:02:40.711328 2208 apiserver.go:52] "Watching apiserver" Jan 17 00:02:40.738820 kubelet[2208]: I0117 00:02:40.738723 2208 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:02:41.079380 
kubelet[2208]: I0117 00:02:41.079021 2208 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:42.360710 systemd[1]: Reloading requested from client PID 2489 ('systemctl') (unit session-7.scope)...
Jan 17 00:02:42.360734 systemd[1]: Reloading...
Jan 17 00:02:42.466575 zram_generator::config[2535]: No configuration found.
Jan 17 00:02:42.551503 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:02:42.635137 systemd[1]: Reloading finished in 274 ms.
Jan 17 00:02:42.675226 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:02:42.689144 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:02:42.689485 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:02:42.689604 systemd[1]: kubelet.service: Consumed 1.283s CPU time, 121.4M memory peak, 0B memory swap peak.
Jan 17 00:02:42.696919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:02:42.846618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:02:42.855890 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:02:42.919630 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:02:42.919630 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:02:42.919630 kubelet[2574]: I0117 00:02:42.919329 2574 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:02:42.928659 kubelet[2574]: I0117 00:02:42.928620 2574 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 17 00:02:42.929144 kubelet[2574]: I0117 00:02:42.928944 2574 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:02:42.929144 kubelet[2574]: I0117 00:02:42.928989 2574 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 17 00:02:42.929144 kubelet[2574]: I0117 00:02:42.928995 2574 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:02:42.934159 kubelet[2574]: I0117 00:02:42.934121 2574 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 17 00:02:42.936396 kubelet[2574]: I0117 00:02:42.936367 2574 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 17 00:02:42.941948 kubelet[2574]: I0117 00:02:42.941693 2574 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:02:42.947490 kubelet[2574]: E0117 00:02:42.947436 2574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:02:42.947728 kubelet[2574]: I0117 00:02:42.947579 2574 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:02:42.951567 kubelet[2574]: I0117 00:02:42.950708 2574 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 17 00:02:42.951567 kubelet[2574]: I0117 00:02:42.950966 2574 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:02:42.951567 kubelet[2574]: I0117 00:02:42.950993 2574 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-089d3b6582","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:02:42.951567 kubelet[2574]: I0117 00:02:42.951154 2574 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:02:42.951810 kubelet[2574]: I0117 00:02:42.951164 2574 container_manager_linux.go:306] "Creating device plugin manager"
Jan 17 00:02:42.951810 kubelet[2574]: I0117 00:02:42.951192 2574 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 17 00:02:42.952193 kubelet[2574]: I0117 00:02:42.952160 2574 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:02:42.952444 kubelet[2574]: I0117 00:02:42.952429 2574 kubelet.go:475] "Attempting to sync node with API server"
Jan 17 00:02:42.952480 kubelet[2574]: I0117 00:02:42.952453 2574 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:02:42.953210 kubelet[2574]: I0117 00:02:42.953185 2574 kubelet.go:387] "Adding apiserver pod source"
Jan 17 00:02:42.953282 kubelet[2574]: I0117 00:02:42.953224 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:02:42.956525 kubelet[2574]: I0117 00:02:42.956416 2574 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:02:42.958821 kubelet[2574]: I0117 00:02:42.958758 2574 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 17 00:02:42.959266 kubelet[2574]: I0117 00:02:42.959048 2574 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 17 00:02:42.966524 kubelet[2574]: I0117 00:02:42.964815 2574 server.go:1262] "Started kubelet"
Jan 17 00:02:42.969039 kubelet[2574]: I0117 00:02:42.969013 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:02:42.972516 kubelet[2574]: I0117 00:02:42.971260 2574 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:02:42.975198 kubelet[2574]: I0117 00:02:42.975171 2574 server.go:310] "Adding debug handlers to kubelet server"
Jan 17 00:02:42.980883 kubelet[2574]: I0117 00:02:42.980823 2574 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:02:42.981066 kubelet[2574]: I0117 00:02:42.981053 2574 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 17 00:02:42.981292 kubelet[2574]: I0117 00:02:42.981276 2574 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:02:42.983247 kubelet[2574]: I0117 00:02:42.983221 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:02:42.985280 kubelet[2574]: I0117 00:02:42.985258 2574 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 17 00:02:42.985620 kubelet[2574]: E0117 00:02:42.985603 2574 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-089d3b6582\" not found"
Jan 17 00:02:42.994156 kubelet[2574]: I0117 00:02:42.994126 2574 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 17 00:02:42.994417 kubelet[2574]: I0117 00:02:42.994406 2574 reconciler.go:29] "Reconciler: start to sync state"
Jan 17 00:02:43.001249 kubelet[2574]: I0117 00:02:42.998837 2574 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:02:43.002886 kubelet[2574]: I0117 00:02:43.002471 2574 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:02:43.002886 kubelet[2574]: I0117 00:02:43.002514 2574 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 17 00:02:43.002886 kubelet[2574]: I0117 00:02:43.002569 2574 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 17 00:02:43.002886 kubelet[2574]: E0117 00:02:43.002623 2574 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:02:43.022021 kubelet[2574]: E0117 00:02:43.021991 2574 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:02:43.022163 kubelet[2574]: I0117 00:02:43.022006 2574 factory.go:223] Registration of the containerd container factory successfully
Jan 17 00:02:43.022218 kubelet[2574]: I0117 00:02:43.022209 2574 factory.go:223] Registration of the systemd container factory successfully
Jan 17 00:02:43.022350 kubelet[2574]: I0117 00:02:43.022332 2574 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:02:43.085238 kubelet[2574]: I0117 00:02:43.085208 2574 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:02:43.085415 kubelet[2574]: I0117 00:02:43.085401 2574 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:02:43.085476 kubelet[2574]: I0117 00:02:43.085468 2574 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:02:43.085815 kubelet[2574]: I0117 00:02:43.085784 2574 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:02:43.085940 kubelet[2574]: I0117 00:02:43.085914 2574 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:02:43.086004 kubelet[2574]: I0117 00:02:43.085997 2574 policy_none.go:49] "None policy: Start"
Jan 17 00:02:43.086074 kubelet[2574]: I0117 00:02:43.086066 2574 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 17 00:02:43.086177 kubelet[2574]: I0117 00:02:43.086164 2574 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 17 00:02:43.086400 kubelet[2574]: I0117 00:02:43.086377 2574 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 17 00:02:43.086465 kubelet[2574]: I0117 00:02:43.086456 2574 policy_none.go:47] "Start"
Jan 17 00:02:43.092649 kubelet[2574]: E0117 00:02:43.092620 2574 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 17 00:02:43.093374 kubelet[2574]: I0117 00:02:43.092980 2574 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:02:43.093374 kubelet[2574]: I0117 00:02:43.092998 2574 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:02:43.093374 kubelet[2574]: I0117 00:02:43.093278 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:02:43.100498 kubelet[2574]: E0117 00:02:43.100460 2574 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:02:43.103417 kubelet[2574]: I0117 00:02:43.103296 2574 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.105828 kubelet[2574]: I0117 00:02:43.105789 2574 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.106737 kubelet[2574]: I0117 00:02:43.106708 2574 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.118901 kubelet[2574]: E0117 00:02:43.118852 2574 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.207685 kubelet[2574]: I0117 00:02:43.207461 2574 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.223591 kubelet[2574]: I0117 00:02:43.223133 2574 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.223591 kubelet[2574]: I0117 00:02:43.223236 2574 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.295555 kubelet[2574]: I0117 00:02:43.295461 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.295818 kubelet[2574]: I0117 00:02:43.295791 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.295940 kubelet[2574]: I0117 00:02:43.295908 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0740d2521bb62474643d607c17501686-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-089d3b6582\" (UID: \"0740d2521bb62474643d607c17501686\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.296173 kubelet[2574]: I0117 00:02:43.295983 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014ed805460c88bb2dac956a887edff1-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" (UID: \"014ed805460c88bb2dac956a887edff1\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.296173 kubelet[2574]: I0117 00:02:43.296011 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014ed805460c88bb2dac956a887edff1-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" (UID: \"014ed805460c88bb2dac956a887edff1\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.296173 kubelet[2574]: I0117 00:02:43.296038 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.296173 kubelet[2574]: I0117 00:02:43.296062 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.296173 kubelet[2574]: I0117 00:02:43.296088 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014ed805460c88bb2dac956a887edff1-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" (UID: \"014ed805460c88bb2dac956a887edff1\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.296438 kubelet[2574]: I0117 00:02:43.296379 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b857e3e114f9365d0857edd8c754623f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" (UID: \"b857e3e114f9365d0857edd8c754623f\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:43.954401 kubelet[2574]: I0117 00:02:43.954332 2574 apiserver.go:52] "Watching apiserver"
Jan 17 00:02:43.994605 kubelet[2574]: I0117 00:02:43.994521 2574 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 17 00:02:44.045004 kubelet[2574]: I0117 00:02:44.044725 2574 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:44.045458 kubelet[2574]: I0117 00:02:44.045438 2574 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:44.061923 kubelet[2574]: E0117 00:02:44.061795 2574 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-089d3b6582\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:44.063450 kubelet[2574]: E0117 00:02:44.063244 2574 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-089d3b6582\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582"
Jan 17 00:02:44.083658 kubelet[2574]: I0117 00:02:44.083587 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-089d3b6582" podStartSLOduration=1.083525594 podStartE2EDuration="1.083525594s" podCreationTimestamp="2026-01-17 00:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:44.082797754 +0000 UTC m=+1.221605961" watchObservedRunningTime="2026-01-17 00:02:44.083525594 +0000 UTC m=+1.222333801"
Jan 17 00:02:44.123319 kubelet[2574]: I0117 00:02:44.123233 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-089d3b6582" podStartSLOduration=3.123210834 podStartE2EDuration="3.123210834s" podCreationTimestamp="2026-01-17 00:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:44.101758034 +0000 UTC m=+1.240566241" watchObservedRunningTime="2026-01-17 00:02:44.123210834 +0000 UTC m=+1.262019041"
Jan 17 00:02:44.123488 kubelet[2574]: I0117 00:02:44.123389 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-089d3b6582" podStartSLOduration=1.123384154 podStartE2EDuration="1.123384154s" podCreationTimestamp="2026-01-17 00:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:44.119360954 +0000 UTC m=+1.258169201" watchObservedRunningTime="2026-01-17 00:02:44.123384154 +0000 UTC m=+1.262192401"
Jan 17 00:02:48.558602 systemd[1]: Created slice kubepods-besteffort-pod99dcb46f_4256_49a6_be46_eba2469963bb.slice - libcontainer container kubepods-besteffort-pod99dcb46f_4256_49a6_be46_eba2469963bb.slice.
Jan 17 00:02:48.631983 kubelet[2574]: I0117 00:02:48.631752 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99dcb46f-4256-49a6-be46-eba2469963bb-lib-modules\") pod \"kube-proxy-kbpht\" (UID: \"99dcb46f-4256-49a6-be46-eba2469963bb\") " pod="kube-system/kube-proxy-kbpht"
Jan 17 00:02:48.631983 kubelet[2574]: I0117 00:02:48.631813 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7mlv\" (UniqueName: \"kubernetes.io/projected/99dcb46f-4256-49a6-be46-eba2469963bb-kube-api-access-j7mlv\") pod \"kube-proxy-kbpht\" (UID: \"99dcb46f-4256-49a6-be46-eba2469963bb\") " pod="kube-system/kube-proxy-kbpht"
Jan 17 00:02:48.631983 kubelet[2574]: I0117 00:02:48.631849 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99dcb46f-4256-49a6-be46-eba2469963bb-kube-proxy\") pod \"kube-proxy-kbpht\" (UID: \"99dcb46f-4256-49a6-be46-eba2469963bb\") " pod="kube-system/kube-proxy-kbpht"
Jan 17 00:02:48.631983 kubelet[2574]: I0117 00:02:48.631874 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99dcb46f-4256-49a6-be46-eba2469963bb-xtables-lock\") pod \"kube-proxy-kbpht\" (UID: \"99dcb46f-4256-49a6-be46-eba2469963bb\") " pod="kube-system/kube-proxy-kbpht"
Jan 17 00:02:48.668040 kubelet[2574]: I0117 00:02:48.667984 2574 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:02:48.669287 containerd[1486]: time="2026-01-17T00:02:48.669192851Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:02:48.670217 kubelet[2574]: I0117 00:02:48.669626 2574 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:02:48.745458 kubelet[2574]: E0117 00:02:48.744385 2574 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 17 00:02:48.745458 kubelet[2574]: E0117 00:02:48.744422 2574 projected.go:196] Error preparing data for projected volume kube-api-access-j7mlv for pod kube-system/kube-proxy-kbpht: configmap "kube-root-ca.crt" not found
Jan 17 00:02:48.745458 kubelet[2574]: E0117 00:02:48.744536 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99dcb46f-4256-49a6-be46-eba2469963bb-kube-api-access-j7mlv podName:99dcb46f-4256-49a6-be46-eba2469963bb nodeName:}" failed. No retries permitted until 2026-01-17 00:02:49.244474318 +0000 UTC m=+6.383282485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j7mlv" (UniqueName: "kubernetes.io/projected/99dcb46f-4256-49a6-be46-eba2469963bb-kube-api-access-j7mlv") pod "kube-proxy-kbpht" (UID: "99dcb46f-4256-49a6-be46-eba2469963bb") : configmap "kube-root-ca.crt" not found
Jan 17 00:02:49.337418 kubelet[2574]: E0117 00:02:49.337372 2574 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 17 00:02:49.337418 kubelet[2574]: E0117 00:02:49.337414 2574 projected.go:196] Error preparing data for projected volume kube-api-access-j7mlv for pod kube-system/kube-proxy-kbpht: configmap "kube-root-ca.crt" not found
Jan 17 00:02:49.337633 kubelet[2574]: E0117 00:02:49.337490 2574 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99dcb46f-4256-49a6-be46-eba2469963bb-kube-api-access-j7mlv podName:99dcb46f-4256-49a6-be46-eba2469963bb nodeName:}" failed. No retries permitted until 2026-01-17 00:02:50.337453425 +0000 UTC m=+7.476261672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7mlv" (UniqueName: "kubernetes.io/projected/99dcb46f-4256-49a6-be46-eba2469963bb-kube-api-access-j7mlv") pod "kube-proxy-kbpht" (UID: "99dcb46f-4256-49a6-be46-eba2469963bb") : configmap "kube-root-ca.crt" not found
Jan 17 00:02:49.785080 systemd[1]: Created slice kubepods-besteffort-pod1d474fb8_9655_45ba_8937_8f2bd2ad83ff.slice - libcontainer container kubepods-besteffort-pod1d474fb8_9655_45ba_8937_8f2bd2ad83ff.slice.
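The two nestedpendingoperations.go entries above show the volume manager's doubling retry backoff: the first MountVolume.SetUp failure is retried after 500ms, the next after 1s. A minimal Go sketch of that capped-doubling pattern (the factor of two is what the log shows; the cap is an illustrative assumption, not kubelet's exact constant):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Doubling backoff as in the entries above: 500ms, 1s, 2s, ...
    	delay := 500 * time.Millisecond
    	maxDelay := 2 * time.Minute // assumed cap, for illustration only
    	for attempt := 1; attempt <= 5; attempt++ {
    		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

The retries resolve on their own once the kube-root-ca.crt configmap appears in the namespace, which is why the kube-proxy sandbox can start a moment later below.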
Jan 17 00:02:49.842080 kubelet[2574]: I0117 00:02:49.841999 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dr6r\" (UniqueName: \"kubernetes.io/projected/1d474fb8-9655-45ba-8937-8f2bd2ad83ff-kube-api-access-8dr6r\") pod \"tigera-operator-65cdcdfd6d-k9jxb\" (UID: \"1d474fb8-9655-45ba-8937-8f2bd2ad83ff\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-k9jxb"
Jan 17 00:02:49.843070 kubelet[2574]: I0117 00:02:49.842990 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d474fb8-9655-45ba-8937-8f2bd2ad83ff-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-k9jxb\" (UID: \"1d474fb8-9655-45ba-8937-8f2bd2ad83ff\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-k9jxb"
Jan 17 00:02:50.093860 containerd[1486]: time="2026-01-17T00:02:50.093774231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-k9jxb,Uid:1d474fb8-9655-45ba-8937-8f2bd2ad83ff,Namespace:tigera-operator,Attempt:0,}"
Jan 17 00:02:50.127076 containerd[1486]: time="2026-01-17T00:02:50.126846895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:02:50.127076 containerd[1486]: time="2026-01-17T00:02:50.126990731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:02:50.127076 containerd[1486]: time="2026-01-17T00:02:50.127025170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:50.127747 containerd[1486]: time="2026-01-17T00:02:50.127693868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:50.157885 systemd[1]: Started cri-containerd-2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653.scope - libcontainer container 2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653.
Jan 17 00:02:50.192416 containerd[1486]: time="2026-01-17T00:02:50.192350325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-k9jxb,Uid:1d474fb8-9655-45ba-8937-8f2bd2ad83ff,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653\""
Jan 17 00:02:50.194927 containerd[1486]: time="2026-01-17T00:02:50.194891603Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 17 00:02:50.371858 containerd[1486]: time="2026-01-17T00:02:50.371306453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbpht,Uid:99dcb46f-4256-49a6-be46-eba2469963bb,Namespace:kube-system,Attempt:0,}"
Jan 17 00:02:50.398556 containerd[1486]: time="2026-01-17T00:02:50.397954722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:02:50.398556 containerd[1486]: time="2026-01-17T00:02:50.398021360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:02:50.398556 containerd[1486]: time="2026-01-17T00:02:50.398037919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:50.398556 containerd[1486]: time="2026-01-17T00:02:50.398232793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:02:50.420800 systemd[1]: Started cri-containerd-3cf591905dacbf6490e6e7fea8b148e8e0af5f716d7ec38c1bc122c1c0dfd7d3.scope - libcontainer container 3cf591905dacbf6490e6e7fea8b148e8e0af5f716d7ec38c1bc122c1c0dfd7d3.
Jan 17 00:02:50.452197 containerd[1486]: time="2026-01-17T00:02:50.452146312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbpht,Uid:99dcb46f-4256-49a6-be46-eba2469963bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cf591905dacbf6490e6e7fea8b148e8e0af5f716d7ec38c1bc122c1c0dfd7d3\""
Jan 17 00:02:50.461054 containerd[1486]: time="2026-01-17T00:02:50.460555404Z" level=info msg="CreateContainer within sandbox \"3cf591905dacbf6490e6e7fea8b148e8e0af5f716d7ec38c1bc122c1c0dfd7d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:02:50.476743 containerd[1486]: time="2026-01-17T00:02:50.476683769Z" level=info msg="CreateContainer within sandbox \"3cf591905dacbf6490e6e7fea8b148e8e0af5f716d7ec38c1bc122c1c0dfd7d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae0112e731bf626ae259d00b59095990b4006d86018dfead717dfba727ced258\""
Jan 17 00:02:50.479568 containerd[1486]: time="2026-01-17T00:02:50.478021487Z" level=info msg="StartContainer for \"ae0112e731bf626ae259d00b59095990b4006d86018dfead717dfba727ced258\""
Jan 17 00:02:50.504740 systemd[1]: Started cri-containerd-ae0112e731bf626ae259d00b59095990b4006d86018dfead717dfba727ced258.scope - libcontainer container ae0112e731bf626ae259d00b59095990b4006d86018dfead717dfba727ced258.
Jan 17 00:02:50.536582 containerd[1486]: time="2026-01-17T00:02:50.536521779Z" level=info msg="StartContainer for \"ae0112e731bf626ae259d00b59095990b4006d86018dfead717dfba727ced258\" returns successfully"
Jan 17 00:02:51.081177 kubelet[2574]: I0117 00:02:51.080390 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbpht" podStartSLOduration=3.080372898 podStartE2EDuration="3.080372898s" podCreationTimestamp="2026-01-17 00:02:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:51.080337299 +0000 UTC m=+8.219145506" watchObservedRunningTime="2026-01-17 00:02:51.080372898 +0000 UTC m=+8.219181065"
Jan 17 00:02:52.564861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648496574.mount: Deactivated successfully.
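The &PodSandboxMetadata{...} fragments containerd prints above are the CRI v1 sandbox identity the kubelet sends with RunPodSandbox. A small sketch of how that metadata is built, reusing the kube-proxy values from the log (only the metadata; the surrounding PodSandboxConfig and gRPC client wiring are omitted, and this is not kubelet's own code):

    package main

    import (
    	"fmt"

    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// The identity containerd echoes back as &PodSandboxMetadata{...}.
    	meta := &runtimeapi.PodSandboxMetadata{
    		Name:      "kube-proxy-kbpht",
    		Uid:       "99dcb46f-4256-49a6-be46-eba2469963bb",
    		Namespace: "kube-system",
    		Attempt:   0,
    	}
    	// A real caller embeds this in a PodSandboxConfig and calls
    	// RuntimeService.RunPodSandbox; here it is only printed.
    	fmt.Printf("%+v\n", meta)
    }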
Jan 17 00:02:53.034097 containerd[1486]: time="2026-01-17T00:02:53.034036948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:53.035880 containerd[1486]: time="2026-01-17T00:02:53.035353034Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 17 00:02:53.037316 containerd[1486]: time="2026-01-17T00:02:53.037256144Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:53.042912 containerd[1486]: time="2026-01-17T00:02:53.042866676Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:53.043906 containerd[1486]: time="2026-01-17T00:02:53.043851690Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.84885897s"
Jan 17 00:02:53.044123 containerd[1486]: time="2026-01-17T00:02:53.044025126Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 17 00:02:53.051960 containerd[1486]: time="2026-01-17T00:02:53.051911998Z" level=info msg="CreateContainer within sandbox \"2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 00:02:53.067734 containerd[1486]: time="2026-01-17T00:02:53.067653824Z" level=info msg="CreateContainer within sandbox \"2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6\""
Jan 17 00:02:53.068661 containerd[1486]: time="2026-01-17T00:02:53.068536361Z" level=info msg="StartContainer for \"d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6\""
Jan 17 00:02:53.095974 systemd[1]: Started cri-containerd-d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6.scope - libcontainer container d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6.
Jan 17 00:02:53.136222 containerd[1486]: time="2026-01-17T00:02:53.135917549Z" level=info msg="StartContainer for \"d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6\" returns successfully"
Jan 17 00:02:54.099234 kubelet[2574]: I0117 00:02:54.099124 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-k9jxb" podStartSLOduration=2.248050351 podStartE2EDuration="5.09910666s" podCreationTimestamp="2026-01-17 00:02:49 +0000 UTC" firstStartedPulling="2026-01-17 00:02:50.19437006 +0000 UTC m=+7.333178227" lastFinishedPulling="2026-01-17 00:02:53.045426289 +0000 UTC m=+10.184234536" observedRunningTime="2026-01-17 00:02:54.098437436 +0000 UTC m=+11.237245643" watchObservedRunningTime="2026-01-17 00:02:54.09910666 +0000 UTC m=+11.237914827"
Jan 17 00:02:59.464195 sudo[1706]: pam_unix(sudo:session): session closed for user root
Jan 17 00:02:59.568051 sshd[1703]: pam_unix(sshd:session): session closed for user core
Jan 17 00:02:59.573245 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit.
Jan 17 00:02:59.574354 systemd[1]: sshd@6-167.235.246.183:22-4.153.228.146:48820.service: Deactivated successfully.
Jan 17 00:02:59.578318 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 00:02:59.578657 systemd[1]: session-7.scope: Consumed 7.760s CPU time, 152.5M memory peak, 0B memory swap peak.
Jan 17 00:02:59.580404 systemd-logind[1463]: Removed session 7.
Jan 17 00:03:00.468651 systemd[1]: Started sshd@7-167.235.246.183:22-85.217.149.12:34754.service - OpenSSH per-connection server daemon (85.217.149.12:34754).
Jan 17 00:03:00.766481 sshd[2973]: Connection closed by 85.217.149.12 port 34754 [preauth]
Jan 17 00:03:00.769159 systemd[1]: sshd@7-167.235.246.183:22-85.217.149.12:34754.service: Deactivated successfully.
Jan 17 00:03:09.679789 kubelet[2574]: E0117 00:03:09.679723 2574 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-6-n-089d3b6582\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-089d3b6582' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret"
Jan 17 00:03:09.686862 systemd[1]: Created slice kubepods-besteffort-pod660cfa0a_971f_41cb_a500_86ee88b9dcbd.slice - libcontainer container kubepods-besteffort-pod660cfa0a_971f_41cb_a500_86ee88b9dcbd.slice.
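The tigera-operator entry above is the one pod in this boot whose two tracker durations differ: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, while podStartSLOduration additionally excludes the image-pull window (for the earlier pods both pull timestamps are the zero time, so the two values coincide). The monotonic m=+ offsets printed in that entry reproduce the logged numbers:

    package main

    import "fmt"

    func main() {
    	// Monotonic offsets (m=+...) copied from the entry above.
    	firstStartedPulling := 7.333178227
    	lastFinishedPulling := 10.184234536
    	podStartE2E := 5.099106660 // podStartE2EDuration, same entry

    	pullWindow := lastFinishedPulling - firstStartedPulling
    	slo := podStartE2E - pullWindow
    	fmt.Printf("pull=%.9fs slo=%.9fs\n", pullWindow, slo)
    	// pull=2.851056309s slo=2.248050351s, matching podStartSLOduration.
    }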
Jan 17 00:03:09.774591 kubelet[2574]: I0117 00:03:09.774397 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/660cfa0a-971f-41cb-a500-86ee88b9dcbd-tigera-ca-bundle\") pod \"calico-typha-87d59cc9b-7955p\" (UID: \"660cfa0a-971f-41cb-a500-86ee88b9dcbd\") " pod="calico-system/calico-typha-87d59cc9b-7955p"
Jan 17 00:03:09.774591 kubelet[2574]: I0117 00:03:09.774463 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzg2b\" (UniqueName: \"kubernetes.io/projected/660cfa0a-971f-41cb-a500-86ee88b9dcbd-kube-api-access-mzg2b\") pod \"calico-typha-87d59cc9b-7955p\" (UID: \"660cfa0a-971f-41cb-a500-86ee88b9dcbd\") " pod="calico-system/calico-typha-87d59cc9b-7955p"
Jan 17 00:03:09.774591 kubelet[2574]: I0117 00:03:09.774481 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/660cfa0a-971f-41cb-a500-86ee88b9dcbd-typha-certs\") pod \"calico-typha-87d59cc9b-7955p\" (UID: \"660cfa0a-971f-41cb-a500-86ee88b9dcbd\") " pod="calico-system/calico-typha-87d59cc9b-7955p"
Jan 17 00:03:09.816727 systemd[1]: Created slice kubepods-besteffort-poda9cf2c71_bafe_460f_b153_71b254ae09ff.slice - libcontainer container kubepods-besteffort-poda9cf2c71_bafe_460f_b153_71b254ae09ff.slice.
Jan 17 00:03:09.875777 kubelet[2574]: I0117 00:03:09.875716 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a9cf2c71-bafe-460f-b153-71b254ae09ff-node-certs\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.876431 kubelet[2574]: I0117 00:03:09.876371 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-xtables-lock\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879534 kubelet[2574]: I0117 00:03:09.876675 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-flexvol-driver-host\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879534 kubelet[2574]: I0117 00:03:09.876731 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-var-lib-calico\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879534 kubelet[2574]: I0117 00:03:09.876815 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-lib-modules\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879534 kubelet[2574]: I0117 00:03:09.876878 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-cni-bin-dir\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879534 kubelet[2574]: I0117 00:03:09.876917 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-policysync\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879860 kubelet[2574]: I0117 00:03:09.876952 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9cf2c71-bafe-460f-b153-71b254ae09ff-tigera-ca-bundle\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879860 kubelet[2574]: I0117 00:03:09.876994 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmsq2\" (UniqueName: \"kubernetes.io/projected/a9cf2c71-bafe-460f-b153-71b254ae09ff-kube-api-access-rmsq2\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879860 kubelet[2574]: I0117 00:03:09.877031 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-cni-log-dir\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879860 kubelet[2574]: I0117 00:03:09.877089 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-cni-net-dir\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.879860 kubelet[2574]: I0117 00:03:09.877130 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a9cf2c71-bafe-460f-b153-71b254ae09ff-var-run-calico\") pod \"calico-node-9ncfr\" (UID: \"a9cf2c71-bafe-460f-b153-71b254ae09ff\") " pod="calico-system/calico-node-9ncfr"
Jan 17 00:03:09.936648 kubelet[2574]: E0117 00:03:09.934269 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac"
Jan 17 00:03:09.940369 kubelet[2574]: E0117 00:03:09.940310 2574 status_manager.go:1018] "Failed to get status for pod" err="pods \"csi-node-driver-rctkw\" is forbidden: User \"system:node:ci-4081-3-6-n-089d3b6582\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-089d3b6582' and this object" podUID="e730921e-fe6a-4325-b721-055844e798ac" pod="calico-system/csi-node-driver-rctkw"
Jan 17 00:03:09.980274 kubelet[2574]: E0117 00:03:09.979982 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:03:09.980274 kubelet[2574]: W0117 00:03:09.980012 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:03:09.980274 kubelet[2574]: E0117 00:03:09.980036 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:03:09.983384 kubelet[2574]: I0117 00:03:09.983332 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e730921e-fe6a-4325-b721-055844e798ac-registration-dir\") pod \"csi-node-driver-rctkw\" (UID: \"e730921e-fe6a-4325-b721-055844e798ac\") " pod="calico-system/csi-node-driver-rctkw"
Jan 17 00:03:09.984409 kubelet[2574]: I0117 00:03:09.984285 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e730921e-fe6a-4325-b721-055844e798ac-socket-dir\") pod \"csi-node-driver-rctkw\" (UID: \"e730921e-fe6a-4325-b721-055844e798ac\") " pod="calico-system/csi-node-driver-rctkw"
Jan 17 00:03:09.985046 kubelet[2574]: I0117 00:03:09.985010 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e730921e-fe6a-4325-b721-055844e798ac-kubelet-dir\") pod \"csi-node-driver-rctkw\" (UID: \"e730921e-fe6a-4325-b721-055844e798ac\") " pod="calico-system/csi-node-driver-rctkw"
Jan 17 00:03:09.987195 kubelet[2574]: I0117 00:03:09.987100 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgc92\" (UniqueName: \"kubernetes.io/projected/e730921e-fe6a-4325-b721-055844e798ac-kube-api-access-tgc92\") pod \"csi-node-driver-rctkw\" (UID: \"e730921e-fe6a-4325-b721-055844e798ac\") " pod="calico-system/csi-node-driver-rctkw"
Error: unexpected end of JSON input" Jan 17 00:03:10.003461 kubelet[2574]: E0117 00:03:10.002953 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.003461 kubelet[2574]: W0117 00:03:10.002968 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.003461 kubelet[2574]: E0117 00:03:10.002979 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.005920 kubelet[2574]: E0117 00:03:10.005849 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.005920 kubelet[2574]: W0117 00:03:10.005871 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.005920 kubelet[2574]: E0117 00:03:10.005894 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.006589 kubelet[2574]: E0117 00:03:10.006562 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.006589 kubelet[2574]: W0117 00:03:10.006583 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.006690 kubelet[2574]: E0117 00:03:10.006601 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.007014 kubelet[2574]: E0117 00:03:10.006992 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.007089 kubelet[2574]: W0117 00:03:10.007020 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.007089 kubelet[2574]: E0117 00:03:10.007033 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.007412 kubelet[2574]: E0117 00:03:10.007390 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.007412 kubelet[2574]: W0117 00:03:10.007408 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.007489 kubelet[2574]: E0117 00:03:10.007420 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.007825 kubelet[2574]: E0117 00:03:10.007627 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.007825 kubelet[2574]: W0117 00:03:10.007655 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.007825 kubelet[2574]: E0117 00:03:10.007667 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.007927 kubelet[2574]: E0117 00:03:10.007849 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.007927 kubelet[2574]: W0117 00:03:10.007857 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.007927 kubelet[2574]: E0117 00:03:10.007867 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.008278 kubelet[2574]: E0117 00:03:10.008035 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.008278 kubelet[2574]: W0117 00:03:10.008118 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.008278 kubelet[2574]: E0117 00:03:10.008146 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.008278 kubelet[2574]: I0117 00:03:10.008237 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e730921e-fe6a-4325-b721-055844e798ac-varrun\") pod \"csi-node-driver-rctkw\" (UID: \"e730921e-fe6a-4325-b721-055844e798ac\") " pod="calico-system/csi-node-driver-rctkw" Jan 17 00:03:10.008429 kubelet[2574]: E0117 00:03:10.008412 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.008429 kubelet[2574]: W0117 00:03:10.008426 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.008497 kubelet[2574]: E0117 00:03:10.008438 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.009348 kubelet[2574]: E0117 00:03:10.009313 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.009461 kubelet[2574]: W0117 00:03:10.009341 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.009461 kubelet[2574]: E0117 00:03:10.009376 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.009709 kubelet[2574]: E0117 00:03:10.009693 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.009709 kubelet[2574]: W0117 00:03:10.009707 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.009780 kubelet[2574]: E0117 00:03:10.009718 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.010033 kubelet[2574]: E0117 00:03:10.010014 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.010033 kubelet[2574]: W0117 00:03:10.010030 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.010244 kubelet[2574]: E0117 00:03:10.010042 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.011019 kubelet[2574]: E0117 00:03:10.010996 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.011019 kubelet[2574]: W0117 00:03:10.011014 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.011378 kubelet[2574]: E0117 00:03:10.011028 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.011871 kubelet[2574]: E0117 00:03:10.011845 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.011871 kubelet[2574]: W0117 00:03:10.011867 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.012933 kubelet[2574]: E0117 00:03:10.011884 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.012933 kubelet[2574]: E0117 00:03:10.012656 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.012933 kubelet[2574]: W0117 00:03:10.012670 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.012933 kubelet[2574]: E0117 00:03:10.012833 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.015629 kubelet[2574]: E0117 00:03:10.015597 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.015629 kubelet[2574]: W0117 00:03:10.015623 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.015812 kubelet[2574]: E0117 00:03:10.015644 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.015915 kubelet[2574]: E0117 00:03:10.015899 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.015915 kubelet[2574]: W0117 00:03:10.015911 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.015977 kubelet[2574]: E0117 00:03:10.015921 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.016631 kubelet[2574]: E0117 00:03:10.016607 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.016631 kubelet[2574]: W0117 00:03:10.016625 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.016833 kubelet[2574]: E0117 00:03:10.016639 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.017489 kubelet[2574]: E0117 00:03:10.017464 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.017713 kubelet[2574]: W0117 00:03:10.017555 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.017713 kubelet[2574]: E0117 00:03:10.017587 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.018278 kubelet[2574]: E0117 00:03:10.018185 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.018278 kubelet[2574]: W0117 00:03:10.018203 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.018278 kubelet[2574]: E0117 00:03:10.018218 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.018800 kubelet[2574]: E0117 00:03:10.018688 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.018800 kubelet[2574]: W0117 00:03:10.018702 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.018800 kubelet[2574]: E0117 00:03:10.018715 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.019094 kubelet[2574]: E0117 00:03:10.019078 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.019387 kubelet[2574]: W0117 00:03:10.019225 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.019387 kubelet[2574]: E0117 00:03:10.019251 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.020654 kubelet[2574]: E0117 00:03:10.020564 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.020654 kubelet[2574]: W0117 00:03:10.020582 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.020654 kubelet[2574]: E0117 00:03:10.020596 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.021053 kubelet[2574]: E0117 00:03:10.020943 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.021053 kubelet[2574]: W0117 00:03:10.020955 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.021053 kubelet[2574]: E0117 00:03:10.020966 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.021435 kubelet[2574]: E0117 00:03:10.021350 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.021435 kubelet[2574]: W0117 00:03:10.021364 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.021435 kubelet[2574]: E0117 00:03:10.021377 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.021771 kubelet[2574]: E0117 00:03:10.021758 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.021899 kubelet[2574]: W0117 00:03:10.021820 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.021899 kubelet[2574]: E0117 00:03:10.021841 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.022205 kubelet[2574]: E0117 00:03:10.022188 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.022372 kubelet[2574]: W0117 00:03:10.022263 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.022372 kubelet[2574]: E0117 00:03:10.022280 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.022659 kubelet[2574]: E0117 00:03:10.022646 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.022780 kubelet[2574]: W0117 00:03:10.022713 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.022780 kubelet[2574]: E0117 00:03:10.022730 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.023137 kubelet[2574]: E0117 00:03:10.023008 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.023137 kubelet[2574]: W0117 00:03:10.023020 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.023137 kubelet[2574]: E0117 00:03:10.023030 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.023466 kubelet[2574]: E0117 00:03:10.023395 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.023466 kubelet[2574]: W0117 00:03:10.023406 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.023466 kubelet[2574]: E0117 00:03:10.023417 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.024606 kubelet[2574]: E0117 00:03:10.024568 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.024606 kubelet[2574]: W0117 00:03:10.024585 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.024606 kubelet[2574]: E0117 00:03:10.024600 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.024887 kubelet[2574]: E0117 00:03:10.024877 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.024931 kubelet[2574]: W0117 00:03:10.024888 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.024931 kubelet[2574]: E0117 00:03:10.024898 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.025687 kubelet[2574]: E0117 00:03:10.025654 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.025687 kubelet[2574]: W0117 00:03:10.025683 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.025777 kubelet[2574]: E0117 00:03:10.025700 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.032005 kubelet[2574]: E0117 00:03:10.031976 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.032230 kubelet[2574]: W0117 00:03:10.032208 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.032319 kubelet[2574]: E0117 00:03:10.032305 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.053494 kubelet[2574]: E0117 00:03:10.053431 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.053794 kubelet[2574]: W0117 00:03:10.053716 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.053794 kubelet[2574]: E0117 00:03:10.053751 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.121849 kubelet[2574]: E0117 00:03:10.121816 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.122196 kubelet[2574]: W0117 00:03:10.122017 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.122196 kubelet[2574]: E0117 00:03:10.122047 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.123114 kubelet[2574]: E0117 00:03:10.122637 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.123114 kubelet[2574]: W0117 00:03:10.122653 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.123114 kubelet[2574]: E0117 00:03:10.122669 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.124251 kubelet[2574]: E0117 00:03:10.124233 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.124530 kubelet[2574]: W0117 00:03:10.124413 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.124530 kubelet[2574]: E0117 00:03:10.124437 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.124912 kubelet[2574]: E0117 00:03:10.124896 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.125126 kubelet[2574]: W0117 00:03:10.124984 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.125126 kubelet[2574]: E0117 00:03:10.125002 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.125266 kubelet[2574]: E0117 00:03:10.125254 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.125330 kubelet[2574]: W0117 00:03:10.125319 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.125536 kubelet[2574]: E0117 00:03:10.125375 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.125961 kubelet[2574]: E0117 00:03:10.125946 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.126030 kubelet[2574]: W0117 00:03:10.126020 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.126105 kubelet[2574]: E0117 00:03:10.126091 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.126503 kubelet[2574]: E0117 00:03:10.126398 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.126503 kubelet[2574]: W0117 00:03:10.126411 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.126503 kubelet[2574]: E0117 00:03:10.126422 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.126727 kubelet[2574]: E0117 00:03:10.126716 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.126777 kubelet[2574]: W0117 00:03:10.126767 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.127071 kubelet[2574]: E0117 00:03:10.126823 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.127191 kubelet[2574]: E0117 00:03:10.127178 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.127253 kubelet[2574]: W0117 00:03:10.127242 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.127313 kubelet[2574]: E0117 00:03:10.127300 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.127615 kubelet[2574]: E0117 00:03:10.127596 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.128277 kubelet[2574]: W0117 00:03:10.128258 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.128360 kubelet[2574]: E0117 00:03:10.128349 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.128391 containerd[1486]: time="2026-01-17T00:03:10.128355149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9ncfr,Uid:a9cf2c71-bafe-460f-b153-71b254ae09ff,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:10.129210 kubelet[2574]: E0117 00:03:10.129006 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.129210 kubelet[2574]: W0117 00:03:10.129022 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.129210 kubelet[2574]: E0117 00:03:10.129035 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.129490 kubelet[2574]: E0117 00:03:10.129361 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.129490 kubelet[2574]: W0117 00:03:10.129374 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.129490 kubelet[2574]: E0117 00:03:10.129385 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.129920 kubelet[2574]: E0117 00:03:10.129907 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.130480 kubelet[2574]: W0117 00:03:10.130074 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.130480 kubelet[2574]: E0117 00:03:10.130095 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.130816 kubelet[2574]: E0117 00:03:10.130786 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.130816 kubelet[2574]: W0117 00:03:10.130808 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.130882 kubelet[2574]: E0117 00:03:10.130838 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.131603 kubelet[2574]: E0117 00:03:10.131184 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.131603 kubelet[2574]: W0117 00:03:10.131206 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.131603 kubelet[2574]: E0117 00:03:10.131219 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.131603 kubelet[2574]: E0117 00:03:10.131498 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.131957 kubelet[2574]: W0117 00:03:10.131552 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.132007 kubelet[2574]: E0117 00:03:10.131960 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.132279 kubelet[2574]: E0117 00:03:10.132230 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.132279 kubelet[2574]: W0117 00:03:10.132275 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.132369 kubelet[2574]: E0117 00:03:10.132288 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.132720 kubelet[2574]: E0117 00:03:10.132700 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.132768 kubelet[2574]: W0117 00:03:10.132716 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.132768 kubelet[2574]: E0117 00:03:10.132761 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.133766 kubelet[2574]: E0117 00:03:10.133743 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.133851 kubelet[2574]: W0117 00:03:10.133781 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.133851 kubelet[2574]: E0117 00:03:10.133799 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.134076 kubelet[2574]: E0117 00:03:10.134052 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.134124 kubelet[2574]: W0117 00:03:10.134077 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.134124 kubelet[2574]: E0117 00:03:10.134094 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.134515 kubelet[2574]: E0117 00:03:10.134495 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.134829 kubelet[2574]: W0117 00:03:10.134537 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.134829 kubelet[2574]: E0117 00:03:10.134551 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.135174 kubelet[2574]: E0117 00:03:10.135157 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.135320 kubelet[2574]: W0117 00:03:10.135241 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.135320 kubelet[2574]: E0117 00:03:10.135261 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.135929 kubelet[2574]: E0117 00:03:10.135855 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.135929 kubelet[2574]: W0117 00:03:10.135870 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.135929 kubelet[2574]: E0117 00:03:10.135883 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:03:10.136364 kubelet[2574]: E0117 00:03:10.136346 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.136364 kubelet[2574]: W0117 00:03:10.136361 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.136550 kubelet[2574]: E0117 00:03:10.136448 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.137677 kubelet[2574]: E0117 00:03:10.137651 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.137757 kubelet[2574]: W0117 00:03:10.137673 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.137757 kubelet[2574]: E0117 00:03:10.137705 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.163540 kubelet[2574]: E0117 00:03:10.162935 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:03:10.163540 kubelet[2574]: W0117 00:03:10.162975 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:03:10.163540 kubelet[2574]: E0117 00:03:10.163001 2574 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:03:10.172976 containerd[1486]: time="2026-01-17T00:03:10.172753040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:10.172976 containerd[1486]: time="2026-01-17T00:03:10.172818319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:10.172976 containerd[1486]: time="2026-01-17T00:03:10.172830039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:10.173245 containerd[1486]: time="2026-01-17T00:03:10.172931278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:10.192218 systemd[1]: Started cri-containerd-b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714.scope - libcontainer container b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714. 
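The driver-call.go / plugins.go triple above records a single condition that the kubelet's plugin prober re-emits on every probe pass until it clears: probing the FlexVolume directory nodeagent~uds execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, stdout is therefore empty, and unmarshalling "" as JSON fails with "unexpected end of JSON input", so the plugin directory is skipped. These probe errors persist until Calico's flexvol-driver container (started further below) installs a real driver binary at that path. As a minimal sketch of the handshake the prober expects, here is an illustrative stub in Go, assuming the standard FlexVolume call convention (verb as argv[1], a JSON status object on stdout); it is not Calico's actual uds driver:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus is a subset of the JSON shape the kubelet's FlexVolume
    // machinery (driver-call.go) parses from a driver's stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // The kubelet invokes the driver binary with the verb as argv[1],
        // e.g. ".../nodeagent~uds/uds init", and reads JSON from stdout.
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Any verb this stub does not implement is reported as unsupported.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }

A real driver would implement the remaining verbs (mount, unmount, and so on); for init the hard requirement is simply a well-formed JSON object with "status": "Success" on stdout, which is exactly what the empty output in the errors above fails to provide.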
Jan 17 00:03:10.228240 containerd[1486]: time="2026-01-17T00:03:10.228177713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9ncfr,Uid:a9cf2c71-bafe-460f-b153-71b254ae09ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\"" Jan 17 00:03:10.230259 containerd[1486]: time="2026-01-17T00:03:10.230132296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:03:10.897793 containerd[1486]: time="2026-01-17T00:03:10.897738915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87d59cc9b-7955p,Uid:660cfa0a-971f-41cb-a500-86ee88b9dcbd,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:10.926467 containerd[1486]: time="2026-01-17T00:03:10.926353024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:10.926467 containerd[1486]: time="2026-01-17T00:03:10.926414623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:10.926467 containerd[1486]: time="2026-01-17T00:03:10.926426543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:10.926741 containerd[1486]: time="2026-01-17T00:03:10.926557142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:10.951736 systemd[1]: Started cri-containerd-6fd796799410708a95f90db5d90dc99fde09535c0f807e1934a6f759452891f3.scope - libcontainer container 6fd796799410708a95f90db5d90dc99fde09535c0f807e1934a6f759452891f3.
Jan 17 00:03:10.987801 containerd[1486]: time="2026-01-17T00:03:10.987746725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87d59cc9b-7955p,Uid:660cfa0a-971f-41cb-a500-86ee88b9dcbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"6fd796799410708a95f90db5d90dc99fde09535c0f807e1934a6f759452891f3\"" Jan 17 00:03:11.856438 containerd[1486]: time="2026-01-17T00:03:11.856358649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:11.857903 containerd[1486]: time="2026-01-17T00:03:11.857850596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Jan 17 00:03:11.859215 containerd[1486]: time="2026-01-17T00:03:11.858793989Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:11.862243 containerd[1486]: time="2026-01-17T00:03:11.862194761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:11.863841 containerd[1486]: time="2026-01-17T00:03:11.863760468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.633576812s" Jan 17 00:03:11.863841 containerd[1486]: time="2026-01-17T00:03:11.863831787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 17 00:03:11.866573 containerd[1486]: time="2026-01-17T00:03:11.866164568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:03:11.870816 containerd[1486]: time="2026-01-17T00:03:11.870778450Z" level=info msg="CreateContainer within sandbox \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:03:11.889294 containerd[1486]: time="2026-01-17T00:03:11.889198898Z" level=info msg="CreateContainer within sandbox \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f\"" Jan 17 00:03:11.890540 containerd[1486]: time="2026-01-17T00:03:11.890225210Z" level=info msg="StartContainer for \"f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f\"" Jan 17 00:03:11.895566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113498427.mount: Deactivated successfully. Jan 17 00:03:11.930062 systemd[1]: run-containerd-runc-k8s.io-f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f-runc.B0ONMn.mount: Deactivated successfully. Jan 17 00:03:11.939839 systemd[1]: Started cri-containerd-f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f.scope - libcontainer container f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f. 
Jan 17 00:03:11.974235 containerd[1486]: time="2026-01-17T00:03:11.974130879Z" level=info msg="StartContainer for \"f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f\" returns successfully" Jan 17 00:03:12.002968 systemd[1]: cri-containerd-f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f.scope: Deactivated successfully. Jan 17 00:03:12.003741 kubelet[2574]: E0117 00:03:12.003624 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:12.040138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f-rootfs.mount: Deactivated successfully. Jan 17 00:03:12.095667 containerd[1486]: time="2026-01-17T00:03:12.095419290Z" level=info msg="shim disconnected" id=f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f namespace=k8s.io Jan 17 00:03:12.095667 containerd[1486]: time="2026-01-17T00:03:12.095519529Z" level=warning msg="cleaning up after shim disconnected" id=f6719aa33e0815737dcef7624cf5c6b947b30216d15216e9337fded3c057316f namespace=k8s.io Jan 17 00:03:12.095667 containerd[1486]: time="2026-01-17T00:03:12.095597009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:03:14.002995 kubelet[2574]: E0117 00:03:14.002940 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:14.184526 containerd[1486]: time="2026-01-17T00:03:14.184463265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:14.186576 containerd[1486]: time="2026-01-17T00:03:14.186469732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Jan 17 00:03:14.187964 containerd[1486]: time="2026-01-17T00:03:14.187920602Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:14.191569 containerd[1486]: time="2026-01-17T00:03:14.190442305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:14.192550 containerd[1486]: time="2026-01-17T00:03:14.191837335Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.325623687s" Jan 17 00:03:14.192550 containerd[1486]: time="2026-01-17T00:03:14.191890095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 17 00:03:14.196277 containerd[1486]: 
time="2026-01-17T00:03:14.194832075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:03:14.223498 containerd[1486]: time="2026-01-17T00:03:14.223248722Z" level=info msg="CreateContainer within sandbox \"6fd796799410708a95f90db5d90dc99fde09535c0f807e1934a6f759452891f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:03:14.241755 containerd[1486]: time="2026-01-17T00:03:14.241677277Z" level=info msg="CreateContainer within sandbox \"6fd796799410708a95f90db5d90dc99fde09535c0f807e1934a6f759452891f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1052fe9d34682ece4e2a91130110c9b7f366d2142a712fc21849a05159ca5124\"" Jan 17 00:03:14.243239 containerd[1486]: time="2026-01-17T00:03:14.243187107Z" level=info msg="StartContainer for \"1052fe9d34682ece4e2a91130110c9b7f366d2142a712fc21849a05159ca5124\"" Jan 17 00:03:14.277731 systemd[1]: Started cri-containerd-1052fe9d34682ece4e2a91130110c9b7f366d2142a712fc21849a05159ca5124.scope - libcontainer container 1052fe9d34682ece4e2a91130110c9b7f366d2142a712fc21849a05159ca5124. Jan 17 00:03:14.321675 containerd[1486]: time="2026-01-17T00:03:14.321619495Z" level=info msg="StartContainer for \"1052fe9d34682ece4e2a91130110c9b7f366d2142a712fc21849a05159ca5124\" returns successfully" Jan 17 00:03:15.173246 kubelet[2574]: I0117 00:03:15.172328 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-87d59cc9b-7955p" podStartSLOduration=2.967778027 podStartE2EDuration="6.172311678s" podCreationTimestamp="2026-01-17 00:03:09 +0000 UTC" firstStartedPulling="2026-01-17 00:03:10.98937855 +0000 UTC m=+28.128186757" lastFinishedPulling="2026-01-17 00:03:14.193912201 +0000 UTC m=+31.332720408" observedRunningTime="2026-01-17 00:03:15.171204645 +0000 UTC m=+32.310012892" watchObservedRunningTime="2026-01-17 00:03:15.172311678 +0000 UTC m=+32.311119885" Jan 17 00:03:16.002959 kubelet[2574]: E0117 00:03:16.002903 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:16.158645 kubelet[2574]: I0117 00:03:16.158571 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:03:17.183074 kubelet[2574]: I0117 00:03:17.181756 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:03:17.755090 containerd[1486]: time="2026-01-17T00:03:17.755015716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:17.757351 containerd[1486]: time="2026-01-17T00:03:17.756909066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 17 00:03:17.758617 containerd[1486]: time="2026-01-17T00:03:17.758550977Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:17.762732 containerd[1486]: time="2026-01-17T00:03:17.762656354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:17.764652 
containerd[1486]: time="2026-01-17T00:03:17.764593463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.568106599s" Jan 17 00:03:17.765030 containerd[1486]: time="2026-01-17T00:03:17.764857661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 17 00:03:17.772460 containerd[1486]: time="2026-01-17T00:03:17.772422219Z" level=info msg="CreateContainer within sandbox \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:03:17.792156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822019554.mount: Deactivated successfully. Jan 17 00:03:17.794798 containerd[1486]: time="2026-01-17T00:03:17.794713135Z" level=info msg="CreateContainer within sandbox \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94\"" Jan 17 00:03:17.796123 containerd[1486]: time="2026-01-17T00:03:17.795873248Z" level=info msg="StartContainer for \"923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94\"" Jan 17 00:03:17.832385 systemd[1]: run-containerd-runc-k8s.io-923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94-runc.ZMP0zo.mount: Deactivated successfully. Jan 17 00:03:17.842718 systemd[1]: Started cri-containerd-923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94.scope - libcontainer container 923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94. Jan 17 00:03:17.875860 containerd[1486]: time="2026-01-17T00:03:17.875787762Z" level=info msg="StartContainer for \"923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94\" returns successfully" Jan 17 00:03:18.004119 kubelet[2574]: E0117 00:03:18.004041 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:18.392423 containerd[1486]: time="2026-01-17T00:03:18.392286372Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:03:18.395821 systemd[1]: cri-containerd-923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94.scope: Deactivated successfully. Jan 17 00:03:18.415527 kubelet[2574]: I0117 00:03:18.415099 2574 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:03:18.521958 systemd[1]: Created slice kubepods-burstable-pod4cd10ffc_3ab7_4de4_a249_8f0e6fd50b38.slice - libcontainer container kubepods-burstable-pod4cd10ffc_3ab7_4de4_a249_8f0e6fd50b38.slice. 
Jan 17 00:03:18.551932 systemd[1]: Created slice kubepods-burstable-pod4408c45d_c746_4759_bf63_32d8b6b15581.slice - libcontainer container kubepods-burstable-pod4408c45d_c746_4759_bf63_32d8b6b15581.slice. Jan 17 00:03:18.563329 systemd[1]: Created slice kubepods-besteffort-poda5e03e55_071e_4370_bbe3_a19857cfbfbd.slice - libcontainer container kubepods-besteffort-poda5e03e55_071e_4370_bbe3_a19857cfbfbd.slice. Jan 17 00:03:18.579070 systemd[1]: Created slice kubepods-besteffort-pod2ec5e3a5_4022_41f5_8198_9b4ac0d8306c.slice - libcontainer container kubepods-besteffort-pod2ec5e3a5_4022_41f5_8198_9b4ac0d8306c.slice. Jan 17 00:03:18.581338 kubelet[2574]: I0117 00:03:18.581299 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38-config-volume\") pod \"coredns-66bc5c9577-nkbkp\" (UID: \"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38\") " pod="kube-system/coredns-66bc5c9577-nkbkp" Jan 17 00:03:18.582683 containerd[1486]: time="2026-01-17T00:03:18.582317657Z" level=info msg="shim disconnected" id=923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94 namespace=k8s.io Jan 17 00:03:18.582683 containerd[1486]: time="2026-01-17T00:03:18.582414296Z" level=warning msg="cleaning up after shim disconnected" id=923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94 namespace=k8s.io Jan 17 00:03:18.582683 containerd[1486]: time="2026-01-17T00:03:18.582501096Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:03:18.582870 kubelet[2574]: I0117 00:03:18.582576 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz7tw\" (UniqueName: \"kubernetes.io/projected/4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38-kube-api-access-hz7tw\") pod \"coredns-66bc5c9577-nkbkp\" (UID: \"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38\") " pod="kube-system/coredns-66bc5c9577-nkbkp" Jan 17 00:03:18.591205 systemd[1]: Created slice kubepods-besteffort-pod3a9d9fee_4b98_43fe_862d_a1e26e86f2ee.slice - libcontainer container kubepods-besteffort-pod3a9d9fee_4b98_43fe_862d_a1e26e86f2ee.slice. Jan 17 00:03:18.603950 systemd[1]: Created slice kubepods-besteffort-pode2865d0a_d4d2_402d_89fc_69d90c7c76b9.slice - libcontainer container kubepods-besteffort-pode2865d0a_d4d2_402d_89fc_69d90c7c76b9.slice. Jan 17 00:03:18.631101 systemd[1]: Created slice kubepods-besteffort-pod362a3452_c30b_406b_9bbb_9543b4b09e90.slice - libcontainer container kubepods-besteffort-pod362a3452_c30b_406b_9bbb_9543b4b09e90.slice. 
Jan 17 00:03:18.683746 kubelet[2574]: I0117 00:03:18.683536 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2865d0a-d4d2-402d-89fc-69d90c7c76b9-calico-apiserver-certs\") pod \"calico-apiserver-798d7c56dc-ghv47\" (UID: \"e2865d0a-d4d2-402d-89fc-69d90c7c76b9\") " pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" Jan 17 00:03:18.684386 kubelet[2574]: I0117 00:03:18.684342 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-ca-bundle\") pod \"whisker-7547b866bf-thdt2\" (UID: \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\") " pod="calico-system/whisker-7547b866bf-thdt2" Jan 17 00:03:18.685288 kubelet[2574]: I0117 00:03:18.685258 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5e03e55-071e-4370-bbe3-a19857cfbfbd-tigera-ca-bundle\") pod \"calico-kube-controllers-7d698fdbf4-vwrcc\" (UID: \"a5e03e55-071e-4370-bbe3-a19857cfbfbd\") " pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" Jan 17 00:03:18.685645 kubelet[2574]: I0117 00:03:18.685554 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/362a3452-c30b-406b-9bbb-9543b4b09e90-config\") pod \"goldmane-7c778bb748-txw7d\" (UID: \"362a3452-c30b-406b-9bbb-9543b4b09e90\") " pod="calico-system/goldmane-7c778bb748-txw7d" Jan 17 00:03:18.685645 kubelet[2574]: I0117 00:03:18.685594 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26zph\" (UniqueName: \"kubernetes.io/projected/a5e03e55-071e-4370-bbe3-a19857cfbfbd-kube-api-access-26zph\") pod \"calico-kube-controllers-7d698fdbf4-vwrcc\" (UID: \"a5e03e55-071e-4370-bbe3-a19857cfbfbd\") " pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" Jan 17 00:03:18.685817 kubelet[2574]: I0117 00:03:18.685679 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82pb\" (UniqueName: \"kubernetes.io/projected/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-kube-api-access-z82pb\") pod \"whisker-7547b866bf-thdt2\" (UID: \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\") " pod="calico-system/whisker-7547b866bf-thdt2" Jan 17 00:03:18.685817 kubelet[2574]: I0117 00:03:18.685724 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szr9p\" (UniqueName: \"kubernetes.io/projected/3a9d9fee-4b98-43fe-862d-a1e26e86f2ee-kube-api-access-szr9p\") pod \"calico-apiserver-798d7c56dc-6ghq5\" (UID: \"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee\") " pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" Jan 17 00:03:18.685817 kubelet[2574]: I0117 00:03:18.685742 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwgr9\" (UniqueName: \"kubernetes.io/projected/4408c45d-c746-4759-bf63-32d8b6b15581-kube-api-access-wwgr9\") pod \"coredns-66bc5c9577-v9lnn\" (UID: \"4408c45d-c746-4759-bf63-32d8b6b15581\") " pod="kube-system/coredns-66bc5c9577-v9lnn" Jan 17 00:03:18.685817 kubelet[2574]: I0117 00:03:18.685765 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3a9d9fee-4b98-43fe-862d-a1e26e86f2ee-calico-apiserver-certs\") pod \"calico-apiserver-798d7c56dc-6ghq5\" (UID: \"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee\") " pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" Jan 17 00:03:18.686905 kubelet[2574]: I0117 00:03:18.685922 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4408c45d-c746-4759-bf63-32d8b6b15581-config-volume\") pod \"coredns-66bc5c9577-v9lnn\" (UID: \"4408c45d-c746-4759-bf63-32d8b6b15581\") " pod="kube-system/coredns-66bc5c9577-v9lnn" Jan 17 00:03:18.686905 kubelet[2574]: I0117 00:03:18.685954 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m45hq\" (UniqueName: \"kubernetes.io/projected/362a3452-c30b-406b-9bbb-9543b4b09e90-kube-api-access-m45hq\") pod \"goldmane-7c778bb748-txw7d\" (UID: \"362a3452-c30b-406b-9bbb-9543b4b09e90\") " pod="calico-system/goldmane-7c778bb748-txw7d" Jan 17 00:03:18.686905 kubelet[2574]: I0117 00:03:18.686025 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/362a3452-c30b-406b-9bbb-9543b4b09e90-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-txw7d\" (UID: \"362a3452-c30b-406b-9bbb-9543b4b09e90\") " pod="calico-system/goldmane-7c778bb748-txw7d" Jan 17 00:03:18.686905 kubelet[2574]: I0117 00:03:18.686051 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfd2\" (UniqueName: \"kubernetes.io/projected/e2865d0a-d4d2-402d-89fc-69d90c7c76b9-kube-api-access-rnfd2\") pod \"calico-apiserver-798d7c56dc-ghv47\" (UID: \"e2865d0a-d4d2-402d-89fc-69d90c7c76b9\") " pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" Jan 17 00:03:18.686905 kubelet[2574]: I0117 00:03:18.686069 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-backend-key-pair\") pod \"whisker-7547b866bf-thdt2\" (UID: \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\") " pod="calico-system/whisker-7547b866bf-thdt2" Jan 17 00:03:18.687048 kubelet[2574]: I0117 00:03:18.686086 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/362a3452-c30b-406b-9bbb-9543b4b09e90-goldmane-key-pair\") pod \"goldmane-7c778bb748-txw7d\" (UID: \"362a3452-c30b-406b-9bbb-9543b4b09e90\") " pod="calico-system/goldmane-7c778bb748-txw7d" Jan 17 00:03:18.794655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-923fedbba2488fef4a0310f162fd7dc252f8d67593f01659e6fc2b9440de2f94-rootfs.mount: Deactivated successfully. 
Jan 17 00:03:18.838069 containerd[1486]: time="2026-01-17T00:03:18.837946358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nkbkp,Uid:4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38,Namespace:kube-system,Attempt:0,}" Jan 17 00:03:18.864154 containerd[1486]: time="2026-01-17T00:03:18.863704583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v9lnn,Uid:4408c45d-c746-4759-bf63-32d8b6b15581,Namespace:kube-system,Attempt:0,}" Jan 17 00:03:18.875519 containerd[1486]: time="2026-01-17T00:03:18.874562606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d698fdbf4-vwrcc,Uid:a5e03e55-071e-4370-bbe3-a19857cfbfbd,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:18.888671 containerd[1486]: time="2026-01-17T00:03:18.887801696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7547b866bf-thdt2,Uid:2ec5e3a5-4022-41f5-8198-9b4ac0d8306c,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:18.911441 containerd[1486]: time="2026-01-17T00:03:18.911179854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-6ghq5,Uid:3a9d9fee-4b98-43fe-862d-a1e26e86f2ee,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:03:18.929149 containerd[1486]: time="2026-01-17T00:03:18.929101680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-ghv47,Uid:e2865d0a-d4d2-402d-89fc-69d90c7c76b9,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:03:18.937976 containerd[1486]: time="2026-01-17T00:03:18.937835514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-txw7d,Uid:362a3452-c30b-406b-9bbb-9543b4b09e90,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:19.026785 containerd[1486]: time="2026-01-17T00:03:19.026730217Z" level=error msg="Failed to destroy network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.032380 containerd[1486]: time="2026-01-17T00:03:19.032321550Z" level=error msg="encountered an error cleaning up failed sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.032654 containerd[1486]: time="2026-01-17T00:03:19.032610588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nkbkp,Uid:4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.033103 kubelet[2574]: E0117 00:03:19.033051 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 
00:03:19.033208 kubelet[2574]: E0117 00:03:19.033129 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nkbkp" Jan 17 00:03:19.033208 kubelet[2574]: E0117 00:03:19.033158 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nkbkp" Jan 17 00:03:19.033260 kubelet[2574]: E0117 00:03:19.033218 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nkbkp_kube-system(4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nkbkp_kube-system(4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nkbkp" podUID="4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38" Jan 17 00:03:19.053245 containerd[1486]: time="2026-01-17T00:03:19.053188687Z" level=error msg="Failed to destroy network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.054459 containerd[1486]: time="2026-01-17T00:03:19.054411081Z" level=error msg="encountered an error cleaning up failed sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.056693 containerd[1486]: time="2026-01-17T00:03:19.056639590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v9lnn,Uid:4408c45d-c746-4759-bf63-32d8b6b15581,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.057288 kubelet[2574]: E0117 00:03:19.057214 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 17 00:03:19.057579 kubelet[2574]: E0117 00:03:19.057309 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v9lnn" Jan 17 00:03:19.057579 kubelet[2574]: E0117 00:03:19.057349 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v9lnn" Jan 17 00:03:19.057579 kubelet[2574]: E0117 00:03:19.057425 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-v9lnn_kube-system(4408c45d-c746-4759-bf63-32d8b6b15581)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-v9lnn_kube-system(4408c45d-c746-4759-bf63-32d8b6b15581)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v9lnn" podUID="4408c45d-c746-4759-bf63-32d8b6b15581" Jan 17 00:03:19.093663 containerd[1486]: time="2026-01-17T00:03:19.093561289Z" level=error msg="Failed to destroy network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.094368 containerd[1486]: time="2026-01-17T00:03:19.094147006Z" level=error msg="encountered an error cleaning up failed sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.094368 containerd[1486]: time="2026-01-17T00:03:19.094219326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d698fdbf4-vwrcc,Uid:a5e03e55-071e-4370-bbe3-a19857cfbfbd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.094569 kubelet[2574]: E0117 00:03:19.094458 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.094569 kubelet[2574]: E0117 00:03:19.094546 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" Jan 17 00:03:19.094569 kubelet[2574]: E0117 00:03:19.094567 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" Jan 17 00:03:19.094675 kubelet[2574]: E0117 00:03:19.094618 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:03:19.130356 containerd[1486]: time="2026-01-17T00:03:19.129745791Z" level=error msg="Failed to destroy network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.130356 containerd[1486]: time="2026-01-17T00:03:19.130203309Z" level=error msg="encountered an error cleaning up failed sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.130356 containerd[1486]: time="2026-01-17T00:03:19.130277069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7547b866bf-thdt2,Uid:2ec5e3a5-4022-41f5-8198-9b4ac0d8306c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.131290 kubelet[2574]: E0117 00:03:19.130798 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.131290 kubelet[2574]: E0117 00:03:19.130865 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7547b866bf-thdt2" Jan 17 00:03:19.131290 kubelet[2574]: E0117 00:03:19.130886 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7547b866bf-thdt2" Jan 17 00:03:19.131465 kubelet[2574]: E0117 00:03:19.130947 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7547b866bf-thdt2_calico-system(2ec5e3a5-4022-41f5-8198-9b4ac0d8306c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7547b866bf-thdt2_calico-system(2ec5e3a5-4022-41f5-8198-9b4ac0d8306c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7547b866bf-thdt2" podUID="2ec5e3a5-4022-41f5-8198-9b4ac0d8306c" Jan 17 00:03:19.135095 containerd[1486]: time="2026-01-17T00:03:19.135034485Z" level=error msg="Failed to destroy network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.135785 containerd[1486]: time="2026-01-17T00:03:19.135745722Z" level=error msg="encountered an error cleaning up failed sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.135912 containerd[1486]: time="2026-01-17T00:03:19.135892041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-6ghq5,Uid:3a9d9fee-4b98-43fe-862d-a1e26e86f2ee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.136470 kubelet[2574]: E0117 00:03:19.136284 2574 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.136470 kubelet[2574]: E0117 00:03:19.136341 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" Jan 17 00:03:19.136470 kubelet[2574]: E0117 00:03:19.136361 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" Jan 17 00:03:19.136725 kubelet[2574]: E0117 00:03:19.136424 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-798d7c56dc-6ghq5_calico-apiserver(3a9d9fee-4b98-43fe-862d-a1e26e86f2ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-798d7c56dc-6ghq5_calico-apiserver(3a9d9fee-4b98-43fe-862d-a1e26e86f2ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:03:19.148765 containerd[1486]: time="2026-01-17T00:03:19.148704938Z" level=error msg="Failed to destroy network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.149122 containerd[1486]: time="2026-01-17T00:03:19.149079936Z" level=error msg="encountered an error cleaning up failed sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.149197 containerd[1486]: time="2026-01-17T00:03:19.149164736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-txw7d,Uid:362a3452-c30b-406b-9bbb-9543b4b09e90,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 17 00:03:19.149580 kubelet[2574]: E0117 00:03:19.149533 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.149674 kubelet[2574]: E0117 00:03:19.149604 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-txw7d" Jan 17 00:03:19.149674 kubelet[2574]: E0117 00:03:19.149626 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-txw7d" Jan 17 00:03:19.149724 kubelet[2574]: E0117 00:03:19.149682 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-txw7d_calico-system(362a3452-c30b-406b-9bbb-9543b4b09e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-txw7d_calico-system(362a3452-c30b-406b-9bbb-9543b4b09e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:03:19.159185 containerd[1486]: time="2026-01-17T00:03:19.159124927Z" level=error msg="Failed to destroy network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.159866 containerd[1486]: time="2026-01-17T00:03:19.159743284Z" level=error msg="encountered an error cleaning up failed sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.159866 containerd[1486]: time="2026-01-17T00:03:19.159805524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-ghv47,Uid:e2865d0a-d4d2-402d-89fc-69d90c7c76b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.160962 kubelet[2574]: E0117 00:03:19.160488 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.160962 kubelet[2574]: E0117 00:03:19.160581 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" Jan 17 00:03:19.160962 kubelet[2574]: E0117 00:03:19.160605 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" Jan 17 00:03:19.161162 kubelet[2574]: E0117 00:03:19.160661 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:03:19.177810 kubelet[2574]: I0117 00:03:19.176105 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:19.177958 containerd[1486]: time="2026-01-17T00:03:19.177605796Z" level=info msg="StopPodSandbox for \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\"" Jan 17 00:03:19.178261 containerd[1486]: time="2026-01-17T00:03:19.178235033Z" level=info msg="Ensure that sandbox ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37 in task-service has been cleanup successfully" Jan 17 00:03:19.181046 kubelet[2574]: I0117 00:03:19.180977 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:19.182477 containerd[1486]: time="2026-01-17T00:03:19.182440532Z" level=info msg="StopPodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\"" Jan 17 00:03:19.182981 containerd[1486]: time="2026-01-17T00:03:19.182870610Z" level=info msg="Ensure that sandbox 
9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f in task-service has been cleanup successfully" Jan 17 00:03:19.185009 containerd[1486]: time="2026-01-17T00:03:19.184962480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:03:19.197648 kubelet[2574]: I0117 00:03:19.197076 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:19.202833 containerd[1486]: time="2026-01-17T00:03:19.201726598Z" level=info msg="StopPodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\"" Jan 17 00:03:19.202833 containerd[1486]: time="2026-01-17T00:03:19.201920077Z" level=info msg="Ensure that sandbox 27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df in task-service has been cleanup successfully" Jan 17 00:03:19.217601 kubelet[2574]: I0117 00:03:19.217557 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:19.219932 containerd[1486]: time="2026-01-17T00:03:19.219886989Z" level=info msg="StopPodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\"" Jan 17 00:03:19.222192 containerd[1486]: time="2026-01-17T00:03:19.221998298Z" level=info msg="Ensure that sandbox 5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b in task-service has been cleanup successfully" Jan 17 00:03:19.233131 kubelet[2574]: I0117 00:03:19.233080 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:19.234468 containerd[1486]: time="2026-01-17T00:03:19.234319278Z" level=info msg="StopPodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\"" Jan 17 00:03:19.237438 containerd[1486]: time="2026-01-17T00:03:19.237129904Z" level=info msg="Ensure that sandbox 8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9 in task-service has been cleanup successfully" Jan 17 00:03:19.245766 kubelet[2574]: I0117 00:03:19.245729 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:19.250215 containerd[1486]: time="2026-01-17T00:03:19.250088680Z" level=info msg="StopPodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\"" Jan 17 00:03:19.252315 containerd[1486]: time="2026-01-17T00:03:19.250869156Z" level=info msg="Ensure that sandbox 9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7 in task-service has been cleanup successfully" Jan 17 00:03:19.258422 kubelet[2574]: I0117 00:03:19.258386 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:19.262731 containerd[1486]: time="2026-01-17T00:03:19.262476339Z" level=info msg="StopPodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\"" Jan 17 00:03:19.262929 containerd[1486]: time="2026-01-17T00:03:19.262898697Z" level=info msg="Ensure that sandbox ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564 in task-service has been cleanup successfully" Jan 17 00:03:19.296858 containerd[1486]: time="2026-01-17T00:03:19.296744811Z" level=error msg="StopPodSandbox for 
\"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" failed" error="failed to destroy network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.300919 kubelet[2574]: E0117 00:03:19.300845 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:19.300919 kubelet[2574]: E0117 00:03:19.300905 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37"} Jan 17 00:03:19.301118 kubelet[2574]: E0117 00:03:19.300959 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"362a3452-c30b-406b-9bbb-9543b4b09e90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.301118 kubelet[2574]: E0117 00:03:19.301000 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"362a3452-c30b-406b-9bbb-9543b4b09e90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:03:19.333107 containerd[1486]: time="2026-01-17T00:03:19.332759154Z" level=error msg="StopPodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" failed" error="failed to destroy network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.334046 kubelet[2574]: E0117 00:03:19.333802 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:19.334046 kubelet[2574]: E0117 00:03:19.333874 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f"} Jan 17 00:03:19.334046 kubelet[2574]: E0117 00:03:19.333914 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.334046 kubelet[2574]: E0117 00:03:19.333948 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:03:19.334708 containerd[1486]: time="2026-01-17T00:03:19.334549785Z" level=error msg="StopPodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" failed" error="failed to destroy network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.335048 kubelet[2574]: E0117 00:03:19.334965 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:19.335048 kubelet[2574]: E0117 00:03:19.335027 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7"} Jan 17 00:03:19.335255 kubelet[2574]: E0117 00:03:19.335057 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.335255 kubelet[2574]: E0117 00:03:19.335112 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7547b866bf-thdt2" podUID="2ec5e3a5-4022-41f5-8198-9b4ac0d8306c" Jan 17 00:03:19.345728 containerd[1486]: time="2026-01-17T00:03:19.345422612Z" level=error msg="StopPodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" failed" error="failed to destroy network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.346060 kubelet[2574]: E0117 00:03:19.345748 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:19.346060 kubelet[2574]: E0117 00:03:19.345807 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564"} Jan 17 00:03:19.346060 kubelet[2574]: E0117 00:03:19.345844 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.346060 kubelet[2574]: E0117 00:03:19.345871 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nkbkp" podUID="4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38" Jan 17 00:03:19.353123 containerd[1486]: time="2026-01-17T00:03:19.352700256Z" level=error msg="StopPodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" failed" error="failed to destroy network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.353653 kubelet[2574]: E0117 00:03:19.353001 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:19.353653 kubelet[2574]: E0117 00:03:19.353059 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b"} Jan 17 00:03:19.353653 kubelet[2574]: E0117 00:03:19.353093 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4408c45d-c746-4759-bf63-32d8b6b15581\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.353653 kubelet[2574]: E0117 00:03:19.353118 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4408c45d-c746-4759-bf63-32d8b6b15581\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v9lnn" podUID="4408c45d-c746-4759-bf63-32d8b6b15581" Jan 17 00:03:19.355150 containerd[1486]: time="2026-01-17T00:03:19.355098925Z" level=error msg="StopPodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" failed" error="failed to destroy network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.355702 kubelet[2574]: E0117 00:03:19.355587 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:19.355702 kubelet[2574]: E0117 00:03:19.355750 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df"} Jan 17 00:03:19.355702 kubelet[2574]: E0117 00:03:19.355786 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5e03e55-071e-4370-bbe3-a19857cfbfbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.356368 kubelet[2574]: E0117 00:03:19.356133 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"a5e03e55-071e-4370-bbe3-a19857cfbfbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:03:19.364346 containerd[1486]: time="2026-01-17T00:03:19.364252760Z" level=error msg="StopPodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" failed" error="failed to destroy network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:19.364854 kubelet[2574]: E0117 00:03:19.364575 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:19.364854 kubelet[2574]: E0117 00:03:19.364637 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9"} Jan 17 00:03:19.364854 kubelet[2574]: E0117 00:03:19.364680 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2865d0a-d4d2-402d-89fc-69d90c7c76b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:19.364854 kubelet[2574]: E0117 00:03:19.364709 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2865d0a-d4d2-402d-89fc-69d90c7c76b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:03:20.015197 systemd[1]: Created slice kubepods-besteffort-pode730921e_fe6a_4325_b721_055844e798ac.slice - libcontainer container kubepods-besteffort-pode730921e_fe6a_4325_b721_055844e798ac.slice. 
Jan 17 00:03:20.021522 containerd[1486]: time="2026-01-17T00:03:20.021450018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rctkw,Uid:e730921e-fe6a-4325-b721-055844e798ac,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:20.089749 containerd[1486]: time="2026-01-17T00:03:20.089683664Z" level=error msg="Failed to destroy network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:20.090245 containerd[1486]: time="2026-01-17T00:03:20.090201262Z" level=error msg="encountered an error cleaning up failed sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:20.090304 containerd[1486]: time="2026-01-17T00:03:20.090278741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rctkw,Uid:e730921e-fe6a-4325-b721-055844e798ac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:20.091634 kubelet[2574]: E0117 00:03:20.091566 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:20.092104 kubelet[2574]: E0117 00:03:20.091652 2574 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rctkw" Jan 17 00:03:20.092104 kubelet[2574]: E0117 00:03:20.091684 2574 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rctkw" Jan 17 00:03:20.092104 kubelet[2574]: E0117 00:03:20.091758 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:20.096340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304-shm.mount: Deactivated successfully. Jan 17 00:03:20.265955 kubelet[2574]: I0117 00:03:20.265831 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:20.267763 containerd[1486]: time="2026-01-17T00:03:20.267699325Z" level=info msg="StopPodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\"" Jan 17 00:03:20.267930 containerd[1486]: time="2026-01-17T00:03:20.267895524Z" level=info msg="Ensure that sandbox a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304 in task-service has been cleanup successfully" Jan 17 00:03:20.295246 containerd[1486]: time="2026-01-17T00:03:20.295188518Z" level=error msg="StopPodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" failed" error="failed to destroy network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:20.295673 kubelet[2574]: E0117 00:03:20.295620 2574 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:20.295767 kubelet[2574]: E0117 00:03:20.295698 2574 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304"} Jan 17 00:03:20.295809 kubelet[2574]: E0117 00:03:20.295744 2574 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e730921e-fe6a-4325-b721-055844e798ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:20.295809 kubelet[2574]: E0117 00:03:20.295795 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e730921e-fe6a-4325-b721-055844e798ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:25.841844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2235721085.mount: Deactivated successfully. Jan 17 00:03:25.869766 containerd[1486]: time="2026-01-17T00:03:25.869691258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:25.871145 containerd[1486]: time="2026-01-17T00:03:25.870926734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:03:25.873546 containerd[1486]: time="2026-01-17T00:03:25.872375209Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:25.875761 containerd[1486]: time="2026-01-17T00:03:25.875717598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:25.877162 containerd[1486]: time="2026-01-17T00:03:25.877102993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.692077473s" Jan 17 00:03:25.877162 containerd[1486]: time="2026-01-17T00:03:25.877158073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:03:25.902016 containerd[1486]: time="2026-01-17T00:03:25.901922190Z" level=info msg="CreateContainer within sandbox \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:03:25.923347 containerd[1486]: time="2026-01-17T00:03:25.923296399Z" level=info msg="CreateContainer within sandbox \"b070fbe191940350386858aa60021d2af9a11bdd30a05ac21b380fb7f67b8714\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863\"" Jan 17 00:03:25.924502 containerd[1486]: time="2026-01-17T00:03:25.924471555Z" level=info msg="StartContainer for \"82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863\"" Jan 17 00:03:25.959713 systemd[1]: Started cri-containerd-82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863.scope - libcontainer container 82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863. Jan 17 00:03:25.996477 containerd[1486]: time="2026-01-17T00:03:25.996423795Z" level=info msg="StartContainer for \"82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863\" returns successfully" Jan 17 00:03:26.151880 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:03:26.152054 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 00:03:26.341009 containerd[1486]: time="2026-01-17T00:03:26.340449839Z" level=info msg="StopPodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\"" Jan 17 00:03:26.351373 kubelet[2574]: I0117 00:03:26.351284 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9ncfr" podStartSLOduration=1.70109696 podStartE2EDuration="17.351265565s" podCreationTimestamp="2026-01-17 00:03:09 +0000 UTC" firstStartedPulling="2026-01-17 00:03:10.229793699 +0000 UTC m=+27.368601906" lastFinishedPulling="2026-01-17 00:03:25.879962344 +0000 UTC m=+43.018770511" observedRunningTime="2026-01-17 00:03:26.343502549 +0000 UTC m=+43.482310756" watchObservedRunningTime="2026-01-17 00:03:26.351265565 +0000 UTC m=+43.490073772" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.471 [INFO][3785] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.471 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" iface="eth0" netns="/var/run/netns/cni-59405740-8bcd-7611-ff07-d64b8102ff7a" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.471 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" iface="eth0" netns="/var/run/netns/cni-59405740-8bcd-7611-ff07-d64b8102ff7a" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.474 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" iface="eth0" netns="/var/run/netns/cni-59405740-8bcd-7611-ff07-d64b8102ff7a" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.475 [INFO][3785] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.475 [INFO][3785] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.549 [INFO][3804] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.549 [INFO][3804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.549 [INFO][3804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.560 [WARNING][3804] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.561 [INFO][3804] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.563 [INFO][3804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:26.569245 containerd[1486]: 2026-01-17 00:03:26.566 [INFO][3785] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:26.570529 containerd[1486]: time="2026-01-17T00:03:26.570304040Z" level=info msg="TearDown network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" successfully" Jan 17 00:03:26.570529 containerd[1486]: time="2026-01-17T00:03:26.570347240Z" level=info msg="StopPodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" returns successfully" Jan 17 00:03:26.658040 kubelet[2574]: I0117 00:03:26.657288 2574 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-ca-bundle\") pod \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\" (UID: \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\") " Jan 17 00:03:26.658040 kubelet[2574]: I0117 00:03:26.657347 2574 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-backend-key-pair\") pod \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\" (UID: \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\") " Jan 17 00:03:26.658040 kubelet[2574]: I0117 00:03:26.657389 2574 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z82pb\" (UniqueName: \"kubernetes.io/projected/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-kube-api-access-z82pb\") pod \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\" (UID: \"2ec5e3a5-4022-41f5-8198-9b4ac0d8306c\") " Jan 17 00:03:26.658040 kubelet[2574]: I0117 00:03:26.657804 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2ec5e3a5-4022-41f5-8198-9b4ac0d8306c" (UID: "2ec5e3a5-4022-41f5-8198-9b4ac0d8306c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:03:26.665152 kubelet[2574]: I0117 00:03:26.664959 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2ec5e3a5-4022-41f5-8198-9b4ac0d8306c" (UID: "2ec5e3a5-4022-41f5-8198-9b4ac0d8306c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:03:26.665540 kubelet[2574]: I0117 00:03:26.665434 2574 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-kube-api-access-z82pb" (OuterVolumeSpecName: "kube-api-access-z82pb") pod "2ec5e3a5-4022-41f5-8198-9b4ac0d8306c" (UID: "2ec5e3a5-4022-41f5-8198-9b4ac0d8306c"). InnerVolumeSpecName "kube-api-access-z82pb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:03:26.758809 kubelet[2574]: I0117 00:03:26.758732 2574 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z82pb\" (UniqueName: \"kubernetes.io/projected/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-kube-api-access-z82pb\") on node \"ci-4081-3-6-n-089d3b6582\" DevicePath \"\"" Jan 17 00:03:26.758809 kubelet[2574]: I0117 00:03:26.758770 2574 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-ca-bundle\") on node \"ci-4081-3-6-n-089d3b6582\" DevicePath \"\"" Jan 17 00:03:26.758809 kubelet[2574]: I0117 00:03:26.758783 2574 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-089d3b6582\" DevicePath \"\"" Jan 17 00:03:26.841297 systemd[1]: run-netns-cni\x2d59405740\x2d8bcd\x2d7611\x2dff07\x2dd64b8102ff7a.mount: Deactivated successfully. Jan 17 00:03:26.841428 systemd[1]: var-lib-kubelet-pods-2ec5e3a5\x2d4022\x2d41f5\x2d8198\x2d9b4ac0d8306c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz82pb.mount: Deactivated successfully. Jan 17 00:03:26.841484 systemd[1]: var-lib-kubelet-pods-2ec5e3a5\x2d4022\x2d41f5\x2d8198\x2d9b4ac0d8306c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:03:27.012802 systemd[1]: Removed slice kubepods-besteffort-pod2ec5e3a5_4022_41f5_8198_9b4ac0d8306c.slice - libcontainer container kubepods-besteffort-pod2ec5e3a5_4022_41f5_8198_9b4ac0d8306c.slice. Jan 17 00:03:27.402253 systemd[1]: Created slice kubepods-besteffort-pod1edd65d8_b5e4_447f_a4cd_2de7f77232a4.slice - libcontainer container kubepods-besteffort-pod1edd65d8_b5e4_447f_a4cd_2de7f77232a4.slice. 
Jan 17 00:03:27.465050 kubelet[2574]: I0117 00:03:27.464837 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv68p\" (UniqueName: \"kubernetes.io/projected/1edd65d8-b5e4-447f-a4cd-2de7f77232a4-kube-api-access-pv68p\") pod \"whisker-9c4977545-g698v\" (UID: \"1edd65d8-b5e4-447f-a4cd-2de7f77232a4\") " pod="calico-system/whisker-9c4977545-g698v" Jan 17 00:03:27.465050 kubelet[2574]: I0117 00:03:27.464907 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1edd65d8-b5e4-447f-a4cd-2de7f77232a4-whisker-backend-key-pair\") pod \"whisker-9c4977545-g698v\" (UID: \"1edd65d8-b5e4-447f-a4cd-2de7f77232a4\") " pod="calico-system/whisker-9c4977545-g698v" Jan 17 00:03:27.465050 kubelet[2574]: I0117 00:03:27.464936 2574 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1edd65d8-b5e4-447f-a4cd-2de7f77232a4-whisker-ca-bundle\") pod \"whisker-9c4977545-g698v\" (UID: \"1edd65d8-b5e4-447f-a4cd-2de7f77232a4\") " pod="calico-system/whisker-9c4977545-g698v" Jan 17 00:03:27.709277 containerd[1486]: time="2026-01-17T00:03:27.709058139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c4977545-g698v,Uid:1edd65d8-b5e4-447f-a4cd-2de7f77232a4,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:27.907357 systemd-networkd[1370]: cali039bc901c5e: Link UP Jan 17 00:03:27.911676 systemd-networkd[1370]: cali039bc901c5e: Gained carrier Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.749 [INFO][3850] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.771 [INFO][3850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0 whisker-9c4977545- calico-system 1edd65d8-b5e4-447f-a4cd-2de7f77232a4 931 0 2026-01-17 00:03:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9c4977545 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 whisker-9c4977545-g698v eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali039bc901c5e [] [] }} ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.771 [INFO][3850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.812 [INFO][3888] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" HandleID="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.813 [INFO][3888] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" HandleID="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-089d3b6582", "pod":"whisker-9c4977545-g698v", "timestamp":"2026-01-17 00:03:27.812964594 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.813 [INFO][3888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.813 [INFO][3888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.813 [INFO][3888] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.826 [INFO][3888] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.843 [INFO][3888] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.860 [INFO][3888] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.866 [INFO][3888] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.870 [INFO][3888] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.870 [INFO][3888] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.872 [INFO][3888] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05 Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.883 [INFO][3888] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.893 [INFO][3888] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.1/26] block=192.168.83.0/26 handle="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.893 [INFO][3888] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.1/26] handle="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.893 [INFO][3888] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:27.952597 containerd[1486]: 2026-01-17 00:03:27.893 [INFO][3888] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.1/26] IPv6=[] ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" HandleID="k8s-pod-network.a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.955859 containerd[1486]: 2026-01-17 00:03:27.895 [INFO][3850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0", GenerateName:"whisker-9c4977545-", Namespace:"calico-system", SelfLink:"", UID:"1edd65d8-b5e4-447f-a4cd-2de7f77232a4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c4977545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"whisker-9c4977545-g698v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali039bc901c5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:27.955859 containerd[1486]: 2026-01-17 00:03:27.895 [INFO][3850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.1/32] ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.955859 containerd[1486]: 2026-01-17 00:03:27.896 [INFO][3850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali039bc901c5e ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.955859 containerd[1486]: 2026-01-17 00:03:27.913 [INFO][3850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.955859 containerd[1486]: 2026-01-17 00:03:27.916 [INFO][3850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0", GenerateName:"whisker-9c4977545-", Namespace:"calico-system", SelfLink:"", UID:"1edd65d8-b5e4-447f-a4cd-2de7f77232a4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9c4977545", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05", Pod:"whisker-9c4977545-g698v", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali039bc901c5e", MAC:"32:5b:21:e5:27:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:27.955859 containerd[1486]: 2026-01-17 00:03:27.945 [INFO][3850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05" Namespace="calico-system" Pod="whisker-9c4977545-g698v" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--9c4977545--g698v-eth0" Jan 17 00:03:27.982195 containerd[1486]: time="2026-01-17T00:03:27.981782580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:27.982195 containerd[1486]: time="2026-01-17T00:03:27.981855180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:27.982195 containerd[1486]: time="2026-01-17T00:03:27.981872499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:27.982195 containerd[1486]: time="2026-01-17T00:03:27.981983939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:28.033169 systemd[1]: Started cri-containerd-a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05.scope - libcontainer container a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05. 
Jan 17 00:03:28.088373 containerd[1486]: time="2026-01-17T00:03:28.088017324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9c4977545-g698v,Uid:1edd65d8-b5e4-447f-a4cd-2de7f77232a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a310b12844f26e8eb8509ef946d9021f1b0c7dab2bc29eaa83652ca468e6bb05\"" Jan 17 00:03:28.092107 containerd[1486]: time="2026-01-17T00:03:28.092064713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:03:28.322170 systemd[1]: run-containerd-runc-k8s.io-82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863-runc.yYJXSt.mount: Deactivated successfully. Jan 17 00:03:28.439657 containerd[1486]: time="2026-01-17T00:03:28.439600558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:28.444693 containerd[1486]: time="2026-01-17T00:03:28.444605305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:03:28.444870 containerd[1486]: time="2026-01-17T00:03:28.444748944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:03:28.445179 kubelet[2574]: E0117 00:03:28.445105 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:28.445179 kubelet[2574]: E0117 00:03:28.445174 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:28.447675 kubelet[2574]: E0117 00:03:28.447607 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:28.449065 containerd[1486]: time="2026-01-17T00:03:28.449000173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:03:28.464612 kernel: bpftool[4059]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:03:28.671654 systemd-networkd[1370]: vxlan.calico: Link UP Jan 17 00:03:28.671662 systemd-networkd[1370]: vxlan.calico: Gained carrier Jan 17 00:03:28.790168 containerd[1486]: time="2026-01-17T00:03:28.790104955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:28.791641 containerd[1486]: time="2026-01-17T00:03:28.791578671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:03:28.791785 containerd[1486]: time="2026-01-17T00:03:28.791705831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:28.792052 kubelet[2574]: E0117 00:03:28.791989 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:28.792052 kubelet[2574]: E0117 00:03:28.792050 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:28.792452 kubelet[2574]: E0117 00:03:28.792136 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:28.792452 kubelet[2574]: E0117 00:03:28.792181 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:03:29.008775 kubelet[2574]: I0117 00:03:29.008283 2574 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec5e3a5-4022-41f5-8198-9b4ac0d8306c" path="/var/lib/kubelet/pods/2ec5e3a5-4022-41f5-8198-9b4ac0d8306c/volumes" Jan 17 00:03:29.302376 kubelet[2574]: E0117 00:03:29.302239 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:03:29.323686 systemd-networkd[1370]: cali039bc901c5e: Gained IPv6LL Jan 17 00:03:30.539670 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Jan 17 00:03:31.007570 containerd[1486]: time="2026-01-17T00:03:31.005619415Z" level=info msg="StopPodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\"" Jan 17 00:03:31.007570 containerd[1486]: time="2026-01-17T00:03:31.007433611Z" level=info msg="StopPodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\"" Jan 17 00:03:31.012475 containerd[1486]: time="2026-01-17T00:03:31.010888523Z" level=info msg="StopPodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\"" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.095 [INFO][4164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.097 [INFO][4164] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" iface="eth0" netns="/var/run/netns/cni-6fd63095-4eed-73dd-777c-026f37060c15" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.100 [INFO][4164] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" iface="eth0" netns="/var/run/netns/cni-6fd63095-4eed-73dd-777c-026f37060c15" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.103 [INFO][4164] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" iface="eth0" netns="/var/run/netns/cni-6fd63095-4eed-73dd-777c-026f37060c15" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.103 [INFO][4164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.103 [INFO][4164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.151 [INFO][4184] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.151 [INFO][4184] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.151 [INFO][4184] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.165 [WARNING][4184] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.165 [INFO][4184] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.168 [INFO][4184] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:31.173678 containerd[1486]: 2026-01-17 00:03:31.171 [INFO][4164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:31.176969 containerd[1486]: time="2026-01-17T00:03:31.173885754Z" level=info msg="TearDown network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" successfully" Jan 17 00:03:31.176969 containerd[1486]: time="2026-01-17T00:03:31.173918074Z" level=info msg="StopPodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" returns successfully" Jan 17 00:03:31.176277 systemd[1]: run-netns-cni\x2d6fd63095\x2d4eed\x2d73dd\x2d777c\x2d026f37060c15.mount: Deactivated successfully. Jan 17 00:03:31.182012 containerd[1486]: time="2026-01-17T00:03:31.180898338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nkbkp,Uid:4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38,Namespace:kube-system,Attempt:1,}" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.112 [INFO][4169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.114 [INFO][4169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" iface="eth0" netns="/var/run/netns/cni-a3ffbc0c-fbce-e96b-2472-a4803d46b5f3" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.114 [INFO][4169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" iface="eth0" netns="/var/run/netns/cni-a3ffbc0c-fbce-e96b-2472-a4803d46b5f3" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.114 [INFO][4169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" iface="eth0" netns="/var/run/netns/cni-a3ffbc0c-fbce-e96b-2472-a4803d46b5f3" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.114 [INFO][4169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.114 [INFO][4169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.156 [INFO][4193] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.156 [INFO][4193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.168 [INFO][4193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.190 [WARNING][4193] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.190 [INFO][4193] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.193 [INFO][4193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:31.204800 containerd[1486]: 2026-01-17 00:03:31.199 [INFO][4169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:31.209852 containerd[1486]: time="2026-01-17T00:03:31.209616833Z" level=info msg="TearDown network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" successfully" Jan 17 00:03:31.209852 containerd[1486]: time="2026-01-17T00:03:31.209652873Z" level=info msg="StopPodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" returns successfully" Jan 17 00:03:31.218323 systemd[1]: run-netns-cni\x2da3ffbc0c\x2dfbce\x2de96b\x2d2472\x2da4803d46b5f3.mount: Deactivated successfully. Jan 17 00:03:31.220560 containerd[1486]: time="2026-01-17T00:03:31.220391009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-ghv47,Uid:e2865d0a-d4d2-402d-89fc-69d90c7c76b9,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.102 [INFO][4165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.103 [INFO][4165] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" iface="eth0" netns="/var/run/netns/cni-1291e705-62a5-343b-751d-dfbcac197777" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.104 [INFO][4165] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" iface="eth0" netns="/var/run/netns/cni-1291e705-62a5-343b-751d-dfbcac197777" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.106 [INFO][4165] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" iface="eth0" netns="/var/run/netns/cni-1291e705-62a5-343b-751d-dfbcac197777" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.106 [INFO][4165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.106 [INFO][4165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.161 [INFO][4186] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.162 [INFO][4186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.193 [INFO][4186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.224 [WARNING][4186] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.224 [INFO][4186] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.228 [INFO][4186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:31.241057 containerd[1486]: 2026-01-17 00:03:31.234 [INFO][4165] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:31.242590 containerd[1486]: time="2026-01-17T00:03:31.241494361Z" level=info msg="TearDown network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" successfully" Jan 17 00:03:31.242590 containerd[1486]: time="2026-01-17T00:03:31.241700921Z" level=info msg="StopPodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" returns successfully" Jan 17 00:03:31.261645 containerd[1486]: time="2026-01-17T00:03:31.261201036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-6ghq5,Uid:3a9d9fee-4b98-43fe-862d-a1e26e86f2ee,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:03:31.442051 systemd-networkd[1370]: cali84b86588260: Link UP Jan 17 00:03:31.443973 systemd-networkd[1370]: cali84b86588260: Gained carrier Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.292 [INFO][4204] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0 coredns-66bc5c9577- kube-system 4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38 961 0 2026-01-17 00:02:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 coredns-66bc5c9577-nkbkp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84b86588260 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.292 [INFO][4204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.351 [INFO][4228] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" HandleID="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.352 [INFO][4228] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" HandleID="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d37c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-089d3b6582", "pod":"coredns-66bc5c9577-nkbkp", "timestamp":"2026-01-17 00:03:31.351587672 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:31.469418 
containerd[1486]: 2026-01-17 00:03:31.352 [INFO][4228] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.352 [INFO][4228] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.352 [INFO][4228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.371 [INFO][4228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.378 [INFO][4228] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.387 [INFO][4228] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.392 [INFO][4228] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.398 [INFO][4228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.398 [INFO][4228] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.403 [INFO][4228] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298 Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.414 [INFO][4228] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.425 [INFO][4228] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.2/26] block=192.168.83.0/26 handle="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.427 [INFO][4228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.2/26] handle="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.427 [INFO][4228] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
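The [4228] records above walk Calico IPAM's block-affinity path end to end: take the host-wide lock, look up the block affine to this node (192.168.83.0/26), claim one free address from it, write the block back to persist the claim, and release the lock. Below is a minimal sketch of that shape in Go. It is illustrative only: the types (Block, Allocator) are hypothetical stand-ins, not Calico's real ones, and the real plugin persists blocks to a datastore rather than mutating memory.

    // Illustrative sketch of block-affinity IPAM, hypothetical types throughout.
    package main

    import (
        "errors"
        "fmt"
        "net"
        "sync"
    )

    // Block models one /26 allocation block with a host affinity.
    type Block struct {
        CIDR     *net.IPNet
        Affinity string          // host this block is affine to
        Used     map[string]bool // allocated addresses, keyed by string form
    }

    // Allocator serializes all assignments behind a host-wide lock, matching
    // the "About to acquire / Acquired / Released host-wide IPAM lock" records.
    type Allocator struct {
        mu     sync.Mutex
        blocks []*Block
    }

    // AutoAssign claims one IPv4 address for host, preferring an affine block.
    func (a *Allocator) AutoAssign(host string) (net.IP, error) {
        a.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock() // "Released host-wide IPAM lock."

        for _, b := range a.blocks {
            if b.Affinity != host {
                continue // "Trying affinity for <cidr>" happens only on affine blocks
            }
            // Walk the block and claim the first free address.
            for ip := b.CIDR.IP.Mask(b.CIDR.Mask); b.CIDR.Contains(ip); ip = next(ip) {
                if !b.Used[ip.String()] {
                    b.Used[ip.String()] = true // "Writing block in order to claim IPs"
                    return append(net.IP(nil), ip...), nil
                }
            }
        }
        return nil, errors.New("no affine block with free addresses")
    }

    // next returns ip+1 (IPv4 only, for brevity).
    func next(ip net.IP) net.IP {
        out := append(net.IP(nil), ip.To4()...)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.83.0/26")
        a := &Allocator{blocks: []*Block{{
            CIDR: cidr, Affinity: "ci-4081-3-6-n-089d3b6582",
            Used: map[string]bool{"192.168.83.0": true, "192.168.83.1": true},
        }}}
        ip, err := a.AutoAssign("ci-4081-3-6-n-089d3b6582")
        fmt.Println(ip, err) // prints 192.168.83.2 <nil>, the address claimed above
    }

The payoff of affinity is visible in the sketch: once the lock is held, assignment is a purely node-local scan of a block this host already owns, with no cross-node coordination needed.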
Jan 17 00:03:31.469418 containerd[1486]: 2026-01-17 00:03:31.427 [INFO][4228] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.2/26] IPv6=[] ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" HandleID="k8s-pod-network.74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.470456 containerd[1486]: 2026-01-17 00:03:31.433 [INFO][4204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"coredns-66bc5c9577-nkbkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84b86588260", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:31.470456 containerd[1486]: 2026-01-17 00:03:31.434 [INFO][4204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.2/32] ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.470456 containerd[1486]: 2026-01-17 00:03:31.434 [INFO][4204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84b86588260 ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" 
WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.470456 containerd[1486]: 2026-01-17 00:03:31.445 [INFO][4204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.470456 containerd[1486]: 2026-01-17 00:03:31.446 [INFO][4204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298", Pod:"coredns-66bc5c9577-nkbkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84b86588260", MAC:"f2:71:05:9e:f4:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:31.470721 containerd[1486]: 2026-01-17 00:03:31.464 [INFO][4204] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298" Namespace="kube-system" Pod="coredns-66bc5c9577-nkbkp" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:31.501851 containerd[1486]: time="2026-01-17T00:03:31.501503372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:31.501851 containerd[1486]: time="2026-01-17T00:03:31.501659252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:31.501851 containerd[1486]: time="2026-01-17T00:03:31.501675612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:31.501851 containerd[1486]: time="2026-01-17T00:03:31.501769932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:31.533271 systemd[1]: Started cri-containerd-74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298.scope - libcontainer container 74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298. Jan 17 00:03:31.589163 systemd-networkd[1370]: calid3059b58743: Link UP Jan 17 00:03:31.591105 systemd-networkd[1370]: calid3059b58743: Gained carrier Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.316 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0 calico-apiserver-798d7c56dc- calico-apiserver e2865d0a-d4d2-402d-89fc-69d90c7c76b9 963 0 2026-01-17 00:02:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:798d7c56dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 calico-apiserver-798d7c56dc-ghv47 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid3059b58743 [] [] }} ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.317 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.366 [INFO][4241] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" HandleID="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.367 [INFO][4241] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" HandleID="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-089d3b6582", "pod":"calico-apiserver-798d7c56dc-ghv47", "timestamp":"2026-01-17 00:03:31.366492158 +0000 UTC"}, 
Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.367 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.427 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.428 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.473 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.484 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.493 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.497 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.506 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.506 [INFO][4241] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.515 [INFO][4241] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90 Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.530 [INFO][4241] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.544 [INFO][4241] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.3/26] block=192.168.83.0/26 handle="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.544 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.3/26] handle="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.545 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:31.640587 containerd[1486]: 2026-01-17 00:03:31.545 [INFO][4241] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.3/26] IPv6=[] ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" HandleID="k8s-pod-network.743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.641388 containerd[1486]: 2026-01-17 00:03:31.558 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2865d0a-d4d2-402d-89fc-69d90c7c76b9", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"calico-apiserver-798d7c56dc-ghv47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3059b58743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:31.641388 containerd[1486]: 2026-01-17 00:03:31.560 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.3/32] ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.641388 containerd[1486]: 2026-01-17 00:03:31.560 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3059b58743 ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.641388 containerd[1486]: 2026-01-17 00:03:31.590 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.641388 containerd[1486]: 2026-01-17 00:03:31.592 
[INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2865d0a-d4d2-402d-89fc-69d90c7c76b9", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90", Pod:"calico-apiserver-798d7c56dc-ghv47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3059b58743", MAC:"66:88:ed:14:d4:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:31.641388 containerd[1486]: 2026-01-17 00:03:31.633 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-ghv47" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:31.650748 containerd[1486]: time="2026-01-17T00:03:31.650494315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nkbkp,Uid:4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38,Namespace:kube-system,Attempt:1,} returns sandbox id \"74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298\"" Jan 17 00:03:31.692579 containerd[1486]: time="2026-01-17T00:03:31.692484740Z" level=info msg="CreateContainer within sandbox \"74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:03:31.728558 containerd[1486]: time="2026-01-17T00:03:31.724730347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:31.728558 containerd[1486]: time="2026-01-17T00:03:31.725081306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:31.728558 containerd[1486]: time="2026-01-17T00:03:31.725094266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:31.732817 containerd[1486]: time="2026-01-17T00:03:31.732705569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:31.738974 containerd[1486]: time="2026-01-17T00:03:31.736550160Z" level=info msg="CreateContainer within sandbox \"74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74e02d829f0ff003ee8f808f64e8502b21258d5f582251548311b72c17fd7ca8\"" Jan 17 00:03:31.738974 containerd[1486]: time="2026-01-17T00:03:31.737857877Z" level=info msg="StartContainer for \"74e02d829f0ff003ee8f808f64e8502b21258d5f582251548311b72c17fd7ca8\"" Jan 17 00:03:31.768375 systemd-networkd[1370]: calif46e10d7769: Link UP Jan 17 00:03:31.770445 systemd-networkd[1370]: calif46e10d7769: Gained carrier Jan 17 00:03:31.792613 systemd[1]: Started cri-containerd-74e02d829f0ff003ee8f808f64e8502b21258d5f582251548311b72c17fd7ca8.scope - libcontainer container 74e02d829f0ff003ee8f808f64e8502b21258d5f582251548311b72c17fd7ca8. Jan 17 00:03:31.815767 systemd[1]: Started cri-containerd-743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90.scope - libcontainer container 743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90. Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.389 [INFO][4236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0 calico-apiserver-798d7c56dc- calico-apiserver 3a9d9fee-4b98-43fe-862d-a1e26e86f2ee 962 0 2026-01-17 00:02:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:798d7c56dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 calico-apiserver-798d7c56dc-6ghq5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif46e10d7769 [] [] }} ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.390 [INFO][4236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.440 [INFO][4254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" HandleID="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.440 [INFO][4254] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" HandleID="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" 
Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-089d3b6582", "pod":"calico-apiserver-798d7c56dc-6ghq5", "timestamp":"2026-01-17 00:03:31.440380271 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.441 [INFO][4254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.545 [INFO][4254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.545 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.578 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.612 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.645 [INFO][4254] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.653 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.658 [INFO][4254] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.658 [INFO][4254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.668 [INFO][4254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738 Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.690 [INFO][4254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.715 [INFO][4254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.4/26] block=192.168.83.0/26 handle="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.715 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.4/26] handle="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.715 [INFO][4254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:31.830261 containerd[1486]: 2026-01-17 00:03:31.715 [INFO][4254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.4/26] IPv6=[] ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" HandleID="k8s-pod-network.bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.833847 containerd[1486]: 2026-01-17 00:03:31.724 [INFO][4236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"calico-apiserver-798d7c56dc-6ghq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif46e10d7769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:31.833847 containerd[1486]: 2026-01-17 00:03:31.725 [INFO][4236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.4/32] ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.833847 containerd[1486]: 2026-01-17 00:03:31.725 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif46e10d7769 ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.833847 containerd[1486]: 2026-01-17 00:03:31.773 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.833847 containerd[1486]: 2026-01-17 00:03:31.777 
[INFO][4236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738", Pod:"calico-apiserver-798d7c56dc-6ghq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif46e10d7769", MAC:"da:a9:a2:55:ad:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:31.833847 containerd[1486]: 2026-01-17 00:03:31.815 [INFO][4236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738" Namespace="calico-apiserver" Pod="calico-apiserver-798d7c56dc-6ghq5" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:31.878294 containerd[1486]: time="2026-01-17T00:03:31.878149480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:31.878294 containerd[1486]: time="2026-01-17T00:03:31.878234920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:31.878294 containerd[1486]: time="2026-01-17T00:03:31.878250080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:31.887540 containerd[1486]: time="2026-01-17T00:03:31.883554068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:31.902266 containerd[1486]: time="2026-01-17T00:03:31.902199585Z" level=info msg="StartContainer for \"74e02d829f0ff003ee8f808f64e8502b21258d5f582251548311b72c17fd7ca8\" returns successfully" Jan 17 00:03:31.922021 systemd[1]: Started cri-containerd-bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738.scope - libcontainer container bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738. Jan 17 00:03:32.001854 containerd[1486]: time="2026-01-17T00:03:32.001809960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-ghv47,Uid:e2865d0a-d4d2-402d-89fc-69d90c7c76b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90\"" Jan 17 00:03:32.005250 containerd[1486]: time="2026-01-17T00:03:32.005148673Z" level=info msg="StopPodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\"" Jan 17 00:03:32.008570 containerd[1486]: time="2026-01-17T00:03:32.005668912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.106 [INFO][4456] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.106 [INFO][4456] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" iface="eth0" netns="/var/run/netns/cni-6f3df65f-fc7f-d810-aba8-686ec98aeb73" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.107 [INFO][4456] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" iface="eth0" netns="/var/run/netns/cni-6f3df65f-fc7f-d810-aba8-686ec98aeb73" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.107 [INFO][4456] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" iface="eth0" netns="/var/run/netns/cni-6f3df65f-fc7f-d810-aba8-686ec98aeb73" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.107 [INFO][4456] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.107 [INFO][4456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.154 [INFO][4464] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.154 [INFO][4464] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.154 [INFO][4464] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.167 [WARNING][4464] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.168 [INFO][4464] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.171 [INFO][4464] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:32.181916 containerd[1486]: 2026-01-17 00:03:32.175 [INFO][4456] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:32.182376 containerd[1486]: time="2026-01-17T00:03:32.182192417Z" level=info msg="TearDown network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" successfully" Jan 17 00:03:32.182376 containerd[1486]: time="2026-01-17T00:03:32.182223337Z" level=info msg="StopPodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" returns successfully" Jan 17 00:03:32.188768 containerd[1486]: time="2026-01-17T00:03:32.188714763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v9lnn,Uid:4408c45d-c746-4759-bf63-32d8b6b15581,Namespace:kube-system,Attempt:1,}" Jan 17 00:03:32.189248 systemd[1]: run-netns-cni\x2d1291e705\x2d62a5\x2d343b\x2d751d\x2ddfbcac197777.mount: Deactivated successfully. Jan 17 00:03:32.197333 systemd[1]: run-netns-cni\x2d6f3df65f\x2dfc7f\x2dd810\x2daba8\x2d686ec98aeb73.mount: Deactivated successfully. 
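Both teardown flows above end the same way: the IPAM plugin warns "Asked to release address but it doesn't exist", ignores it, and StopPodSandbox still returns successfully. That is deliberate: a CNI DEL must be idempotent, so a repeated delete (or a delete after a half-finished add) can never wedge pod removal. A toy sketch of the pattern, with a hypothetical stand-in for the datastore:

    // Illustrative sketch: treat "already gone" as success in a DEL handler.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("allocation not found")

    // store is a hypothetical stand-in for the IPAM datastore.
    type store map[string]string // handleID -> IP

    func (s store) release(handleID string) error {
        if _, ok := s[handleID]; !ok {
            return errNotFound
        }
        delete(s, handleID)
        return nil
    }

    // cmdDel mirrors the logged teardown: not-found is warned about, then ignored.
    func cmdDel(s store, handleID string) error {
        if err := s.release(handleID); errors.Is(err, errNotFound) {
            fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
            return nil // idempotent: not-found is not a failure
        } else if err != nil {
            return err
        }
        return nil
    }

    func main() {
        s := store{}
        // Placeholder handle; real handles look like "k8s-pod-network.<containerID>".
        fmt.Println(cmdDel(s, "k8s-pod-network.placeholder")) // <nil>, like the log
    }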
Jan 17 00:03:32.250969 containerd[1486]: time="2026-01-17T00:03:32.250869551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798d7c56dc-6ghq5,Uid:3a9d9fee-4b98-43fe-862d-a1e26e86f2ee,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738\"" Jan 17 00:03:32.344041 containerd[1486]: time="2026-01-17T00:03:32.343981274Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:32.346367 containerd[1486]: time="2026-01-17T00:03:32.345581910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:32.346367 containerd[1486]: time="2026-01-17T00:03:32.345716110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:32.346684 kubelet[2574]: E0117 00:03:32.345861 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:32.346684 kubelet[2574]: E0117 00:03:32.345914 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:32.346684 kubelet[2574]: E0117 00:03:32.346461 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:32.346684 kubelet[2574]: E0117 00:03:32.346559 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:03:32.347915 containerd[1486]: time="2026-01-17T00:03:32.347700306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:32.368380 kubelet[2574]: I0117 00:03:32.368064 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nkbkp" podStartSLOduration=43.368043343 podStartE2EDuration="43.368043343s" podCreationTimestamp="2026-01-17 00:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-17 00:03:32.33634053 +0000 UTC m=+49.475148857" watchObservedRunningTime="2026-01-17 00:03:32.368043343 +0000 UTC m=+49.506851550" Jan 17 00:03:32.430640 systemd-networkd[1370]: caliab675e29e06: Link UP Jan 17 00:03:32.432851 systemd-networkd[1370]: caliab675e29e06: Gained carrier Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.281 [INFO][4470] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0 coredns-66bc5c9577- kube-system 4408c45d-c746-4759-bf63-32d8b6b15581 981 0 2026-01-17 00:02:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 coredns-66bc5c9577-v9lnn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab675e29e06 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.281 [INFO][4470] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.328 [INFO][4489] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" HandleID="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.328 [INFO][4489] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" HandleID="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-089d3b6582", "pod":"coredns-66bc5c9577-v9lnn", "timestamp":"2026-01-17 00:03:32.328432267 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.328 [INFO][4489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.328 [INFO][4489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.328 [INFO][4489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.358 [INFO][4489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.372 [INFO][4489] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.389 [INFO][4489] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.393 [INFO][4489] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.398 [INFO][4489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.398 [INFO][4489] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.402 [INFO][4489] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.410 [INFO][4489] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.421 [INFO][4489] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.5/26] block=192.168.83.0/26 handle="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.422 [INFO][4489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.5/26] handle="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.422 [INFO][4489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:32.464859 containerd[1486]: 2026-01-17 00:03:32.422 [INFO][4489] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.5/26] IPv6=[] ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" HandleID="k8s-pod-network.ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.465775 containerd[1486]: 2026-01-17 00:03:32.425 [INFO][4470] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4408c45d-c746-4759-bf63-32d8b6b15581", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"coredns-66bc5c9577-v9lnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab675e29e06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:32.465775 containerd[1486]: 2026-01-17 00:03:32.425 [INFO][4470] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.5/32] ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.465775 containerd[1486]: 2026-01-17 00:03:32.425 [INFO][4470] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab675e29e06 ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" 
WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.465775 containerd[1486]: 2026-01-17 00:03:32.432 [INFO][4470] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.465775 containerd[1486]: 2026-01-17 00:03:32.440 [INFO][4470] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4408c45d-c746-4759-bf63-32d8b6b15581", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc", Pod:"coredns-66bc5c9577-v9lnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab675e29e06", MAC:"be:34:49:9c:1b:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:32.466079 containerd[1486]: 2026-01-17 00:03:32.461 [INFO][4470] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-v9lnn" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:32.486048 containerd[1486]: time="2026-01-17T00:03:32.485751053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:32.486048 containerd[1486]: time="2026-01-17T00:03:32.485823853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:32.486048 containerd[1486]: time="2026-01-17T00:03:32.485838893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:32.486048 containerd[1486]: time="2026-01-17T00:03:32.485965092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:32.515724 systemd[1]: Started cri-containerd-ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc.scope - libcontainer container ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc. Jan 17 00:03:32.554782 containerd[1486]: time="2026-01-17T00:03:32.554707867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v9lnn,Uid:4408c45d-c746-4759-bf63-32d8b6b15581,Namespace:kube-system,Attempt:1,} returns sandbox id \"ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc\"" Jan 17 00:03:32.562208 containerd[1486]: time="2026-01-17T00:03:32.562155331Z" level=info msg="CreateContainer within sandbox \"ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:03:32.575893 containerd[1486]: time="2026-01-17T00:03:32.575843382Z" level=info msg="CreateContainer within sandbox \"ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41947c6bf6ba574d5e47a623985aae0a137d9daa7d0385f8d4dcbdebc57b1578\"" Jan 17 00:03:32.578719 containerd[1486]: time="2026-01-17T00:03:32.578616936Z" level=info msg="StartContainer for \"41947c6bf6ba574d5e47a623985aae0a137d9daa7d0385f8d4dcbdebc57b1578\"" Jan 17 00:03:32.615755 systemd[1]: Started cri-containerd-41947c6bf6ba574d5e47a623985aae0a137d9daa7d0385f8d4dcbdebc57b1578.scope - libcontainer container 41947c6bf6ba574d5e47a623985aae0a137d9daa7d0385f8d4dcbdebc57b1578. 
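The CNI trace above ends with Calico "Setting the host side veth name to caliab675e29e06". Calico derives these stable, 15-character interface names by hashing an endpoint identifier; as a minimal Go sketch of that general technique (the exact hash input is an assumption here, not Calico's actual code):

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // hostVethName shows how a CNI plugin can derive a stable host-side
    // interface name from an endpoint identifier. The choice of SHA-1 and
    // of the identifier string is illustrative, not Calico's real code.
    func hostVethName(prefix, endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        // Linux interface names are capped at 15 bytes (IFNAMSIZ-1),
        // so keep the prefix plus 11 hex characters of the digest.
        return prefix + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(hostVethName("cali", "kube-system/coredns-66bc5c9577-v9lnn"))
    }

Whatever the exact input, hashing gives a name that is deterministic per endpoint yet unlikely to collide with other interfaces on the node, which is why each pod's log shows a distinct caliXXXXXXXXXXX device.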
Jan 17 00:03:32.654876 containerd[1486]: time="2026-01-17T00:03:32.654752254Z" level=info msg="StartContainer for \"41947c6bf6ba574d5e47a623985aae0a137d9daa7d0385f8d4dcbdebc57b1578\" returns successfully" Jan 17 00:03:32.699780 containerd[1486]: time="2026-01-17T00:03:32.699614599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:32.702265 containerd[1486]: time="2026-01-17T00:03:32.702175914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:32.702388 containerd[1486]: time="2026-01-17T00:03:32.702209074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:32.702603 kubelet[2574]: E0117 00:03:32.702559 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:32.702788 kubelet[2574]: E0117 00:03:32.702611 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:32.702788 kubelet[2574]: E0117 00:03:32.702697 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-6ghq5_calico-apiserver(3a9d9fee-4b98-43fe-862d-a1e26e86f2ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:32.702788 kubelet[2574]: E0117 00:03:32.702736 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:03:32.971686 systemd-networkd[1370]: calif46e10d7769: Gained IPv6LL Jan 17 00:03:33.162956 systemd-networkd[1370]: calid3059b58743: Gained IPv6LL Jan 17 00:03:33.335657 kubelet[2574]: E0117 00:03:33.335123 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:03:33.335657 kubelet[2574]: E0117 00:03:33.335263 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:03:33.351584 kubelet[2574]: I0117 00:03:33.350202 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v9lnn" podStartSLOduration=44.350184825 podStartE2EDuration="44.350184825s" podCreationTimestamp="2026-01-17 00:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:03:33.350122705 +0000 UTC m=+50.488930912" watchObservedRunningTime="2026-01-17 00:03:33.350184825 +0000 UTC m=+50.488993032" Jan 17 00:03:33.420424 systemd-networkd[1370]: cali84b86588260: Gained IPv6LL Jan 17 00:03:33.997082 systemd-networkd[1370]: caliab675e29e06: Gained IPv6LL Jan 17 00:03:34.005861 containerd[1486]: time="2026-01-17T00:03:34.004347684Z" level=info msg="StopPodSandbox for \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\"" Jan 17 00:03:34.005861 containerd[1486]: time="2026-01-17T00:03:34.004670603Z" level=info msg="StopPodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\"" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.103 [INFO][4615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.104 [INFO][4615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" iface="eth0" netns="/var/run/netns/cni-de81cc5c-ae8d-0b41-e971-ce7983872e65" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.105 [INFO][4615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" iface="eth0" netns="/var/run/netns/cni-de81cc5c-ae8d-0b41-e971-ce7983872e65" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.111 [INFO][4615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" iface="eth0" netns="/var/run/netns/cni-de81cc5c-ae8d-0b41-e971-ce7983872e65" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.111 [INFO][4615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.111 [INFO][4615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.158 [INFO][4630] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.158 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.158 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.171 [WARNING][4630] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.171 [INFO][4630] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.176 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:34.185036 containerd[1486]: 2026-01-17 00:03:34.179 [INFO][4615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:34.191806 containerd[1486]: time="2026-01-17T00:03:34.188027381Z" level=info msg="TearDown network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" successfully" Jan 17 00:03:34.191806 containerd[1486]: time="2026-01-17T00:03:34.188080861Z" level=info msg="StopPodSandbox for \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" returns successfully" Jan 17 00:03:34.189677 systemd[1]: run-netns-cni\x2dde81cc5c\x2dae8d\x2d0b41\x2de971\x2dce7983872e65.mount: Deactivated successfully. Jan 17 00:03:34.194013 containerd[1486]: time="2026-01-17T00:03:34.193967730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-txw7d,Uid:362a3452-c30b-406b-9bbb-9543b4b09e90,Namespace:calico-system,Attempt:1,}" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.108 [INFO][4616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.108 [INFO][4616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" iface="eth0" netns="/var/run/netns/cni-80791c50-04fc-6895-e981-abb352923d73" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.109 [INFO][4616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" iface="eth0" netns="/var/run/netns/cni-80791c50-04fc-6895-e981-abb352923d73" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.115 [INFO][4616] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" iface="eth0" netns="/var/run/netns/cni-80791c50-04fc-6895-e981-abb352923d73" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.120 [INFO][4616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.120 [INFO][4616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.175 [INFO][4632] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.175 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.176 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.199 [WARNING][4632] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.199 [INFO][4632] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.203 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:34.208852 containerd[1486]: 2026-01-17 00:03:34.205 [INFO][4616] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:34.210730 containerd[1486]: time="2026-01-17T00:03:34.210594539Z" level=info msg="TearDown network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" successfully" Jan 17 00:03:34.210730 containerd[1486]: time="2026-01-17T00:03:34.210633299Z" level=info msg="StopPodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" returns successfully" Jan 17 00:03:34.215169 containerd[1486]: time="2026-01-17T00:03:34.213988333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d698fdbf4-vwrcc,Uid:a5e03e55-071e-4370-bbe3-a19857cfbfbd,Namespace:calico-system,Attempt:1,}" Jan 17 00:03:34.216788 systemd[1]: run-netns-cni\x2d80791c50\x2d04fc\x2d6895\x2de981\x2dabb352923d73.mount: Deactivated successfully. Jan 17 00:03:34.418442 systemd-networkd[1370]: calie9449b07005: Link UP Jan 17 00:03:34.422555 systemd-networkd[1370]: calie9449b07005: Gained carrier Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.288 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0 goldmane-7c778bb748- calico-system 362a3452-c30b-406b-9bbb-9543b4b09e90 1026 0 2026-01-17 00:03:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 goldmane-7c778bb748-txw7d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie9449b07005 [] [] }} ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.292 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.343 [INFO][4668] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" HandleID="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.343 [INFO][4668] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" HandleID="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-089d3b6582", "pod":"goldmane-7c778bb748-txw7d", "timestamp":"2026-01-17 00:03:34.343430131 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.343 [INFO][4668] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.343 [INFO][4668] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.343 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.360 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.371 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.378 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.381 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.386 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.386 [INFO][4668] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.390 [INFO][4668] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.396 [INFO][4668] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.406 [INFO][4668] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.6/26] block=192.168.83.0/26 handle="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.406 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.6/26] handle="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.406 [INFO][4668] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
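The IPAM trace here claims 192.168.83.6 out of the node-affine block 192.168.83.0/26: a /26 holds 64 addresses, and each allocation is an ordinal offset into the block. A small Go sketch of that ordinal-to-address arithmetic (illustrative only; Calico's block bookkeeping is more involved):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // ordinalToIP maps an allocation ordinal inside an IPAM block to a
    // concrete address -- the arithmetic behind log lines like
    // "Successfully claimed IPs: [192.168.83.6/26]".
    func ordinalToIP(block netip.Prefix, ordinal int) (netip.Addr, error) {
        size := 1 << (32 - block.Bits()) // a /26 holds 64 addresses
        if ordinal < 0 || ordinal >= size {
            return netip.Addr{}, fmt.Errorf("ordinal %d out of range for %s", ordinal, block)
        }
        a := block.Masked().Addr().As4()
        n := uint32(a[0])<<24 | uint32(a[1])<<16 | uint32(a[2])<<8 | uint32(a[3])
        n += uint32(ordinal)
        return netip.AddrFrom4([4]byte{byte(n >> 24), byte(n >> 16), byte(n >> 8), byte(n)}), nil
    }

    func main() {
        block := netip.MustParsePrefix("192.168.83.0/26")
        ip, _ := ordinalToIP(block, 6)
        fmt.Println(ip) // 192.168.83.6
    }

The surrounding "Writing block in order to claim IPs" step is what persists the chosen ordinal back to the datastore, which is why the plugin holds the host-wide IPAM lock for the duration of the claim.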
Jan 17 00:03:34.446340 containerd[1486]: 2026-01-17 00:03:34.406 [INFO][4668] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.6/26] IPv6=[] ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" HandleID="k8s-pod-network.0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.447688 containerd[1486]: 2026-01-17 00:03:34.410 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"362a3452-c30b-406b-9bbb-9543b4b09e90", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"goldmane-7c778bb748-txw7d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9449b07005", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:34.447688 containerd[1486]: 2026-01-17 00:03:34.411 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.6/32] ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.447688 containerd[1486]: 2026-01-17 00:03:34.411 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9449b07005 ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.447688 containerd[1486]: 2026-01-17 00:03:34.421 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.447688 containerd[1486]: 2026-01-17 00:03:34.421 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" 
Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"362a3452-c30b-406b-9bbb-9543b4b09e90", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a", Pod:"goldmane-7c778bb748-txw7d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9449b07005", MAC:"ce:6e:21:ab:de:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:34.447688 containerd[1486]: 2026-01-17 00:03:34.443 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a" Namespace="calico-system" Pod="goldmane-7c778bb748-txw7d" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:34.482180 containerd[1486]: time="2026-01-17T00:03:34.482029553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:34.482180 containerd[1486]: time="2026-01-17T00:03:34.482097472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:34.482180 containerd[1486]: time="2026-01-17T00:03:34.482117592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:34.482672 containerd[1486]: time="2026-01-17T00:03:34.482222272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:34.512767 systemd[1]: Started cri-containerd-0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a.scope - libcontainer container 0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a. 
Jan 17 00:03:34.538960 systemd-networkd[1370]: calif5bd7382ce0: Link UP Jan 17 00:03:34.540445 systemd-networkd[1370]: calif5bd7382ce0: Gained carrier Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.327 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0 calico-kube-controllers-7d698fdbf4- calico-system a5e03e55-071e-4370-bbe3-a19857cfbfbd 1027 0 2026-01-17 00:03:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d698fdbf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 calico-kube-controllers-7d698fdbf4-vwrcc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif5bd7382ce0 [] [] }} ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.327 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.372 [INFO][4675] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" HandleID="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.372 [INFO][4675] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" HandleID="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-089d3b6582", "pod":"calico-kube-controllers-7d698fdbf4-vwrcc", "timestamp":"2026-01-17 00:03:34.372384357 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.372 [INFO][4675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.406 [INFO][4675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.406 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.460 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.474 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.493 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.497 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.501 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.501 [INFO][4675] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.505 [INFO][4675] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46 Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.512 [INFO][4675] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.526 [INFO][4675] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.7/26] block=192.168.83.0/26 handle="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.527 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.7/26] handle="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.527 [INFO][4675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
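The teardown traces earlier ("Releasing address using handleID", then the WARNING "Asked to release address but it doesn't exist. Ignoring") show that IPAM indexes allocations by a per-container handle and treats release as idempotent, so a repeated or racing teardown is a no-op rather than an error. A minimal Go sketch of that bookkeeping, under the assumption of a simple handle-to-IPs map (not Calico's actual data model):

    package main

    import (
        "fmt"
        "sync"
    )

    // handleStore indexes allocated IPs by handle so that tearing down a
    // container can free everything it owned, and releasing an unknown
    // handle is silently ignored -- the behaviour behind the WARNING lines.
    type handleStore struct {
        mu       sync.Mutex
        byHandle map[string][]string // handle -> allocated IPs
    }

    func (s *handleStore) release(handle string) []string {
        s.mu.Lock()
        defer s.mu.Unlock()
        ips, ok := s.byHandle[handle]
        if !ok {
            return nil // idempotent: nothing to do on a repeat teardown
        }
        delete(s.byHandle, handle)
        return ips
    }

    func main() {
        s := &handleStore{byHandle: map[string][]string{
            "k8s-pod-network.ed05ca78": {"192.168.83.3"},
        }}
        fmt.Println(s.release("k8s-pod-network.ed05ca78")) // [192.168.83.3]
        fmt.Println(s.release("k8s-pod-network.ed05ca78")) // [] -- safely ignored
    }
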
Jan 17 00:03:34.571226 containerd[1486]: 2026-01-17 00:03:34.527 [INFO][4675] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.7/26] IPv6=[] ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" HandleID="k8s-pod-network.70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.572545 containerd[1486]: 2026-01-17 00:03:34.533 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0", GenerateName:"calico-kube-controllers-7d698fdbf4-", Namespace:"calico-system", SelfLink:"", UID:"a5e03e55-071e-4370-bbe3-a19857cfbfbd", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d698fdbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"calico-kube-controllers-7d698fdbf4-vwrcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5bd7382ce0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:34.572545 containerd[1486]: 2026-01-17 00:03:34.533 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.7/32] ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.572545 containerd[1486]: 2026-01-17 00:03:34.533 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5bd7382ce0 ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.572545 containerd[1486]: 2026-01-17 00:03:34.547 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" 
WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.572545 containerd[1486]: 2026-01-17 00:03:34.549 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0", GenerateName:"calico-kube-controllers-7d698fdbf4-", Namespace:"calico-system", SelfLink:"", UID:"a5e03e55-071e-4370-bbe3-a19857cfbfbd", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d698fdbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46", Pod:"calico-kube-controllers-7d698fdbf4-vwrcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5bd7382ce0", MAC:"6a:a8:44:ce:eb:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:34.572545 containerd[1486]: 2026-01-17 00:03:34.565 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46" Namespace="calico-system" Pod="calico-kube-controllers-7d698fdbf4-vwrcc" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:34.596892 containerd[1486]: time="2026-01-17T00:03:34.596750019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:34.597224 containerd[1486]: time="2026-01-17T00:03:34.597009178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:34.597224 containerd[1486]: time="2026-01-17T00:03:34.597049618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:34.597224 containerd[1486]: time="2026-01-17T00:03:34.597168898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:34.613672 containerd[1486]: time="2026-01-17T00:03:34.613626667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-txw7d,Uid:362a3452-c30b-406b-9bbb-9543b4b09e90,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a\"" Jan 17 00:03:34.617813 containerd[1486]: time="2026-01-17T00:03:34.617599940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:03:34.634937 systemd[1]: Started cri-containerd-70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46.scope - libcontainer container 70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46. Jan 17 00:03:34.689525 containerd[1486]: time="2026-01-17T00:03:34.689310086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d698fdbf4-vwrcc,Uid:a5e03e55-071e-4370-bbe3-a19857cfbfbd,Namespace:calico-system,Attempt:1,} returns sandbox id \"70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46\"" Jan 17 00:03:34.955766 containerd[1486]: time="2026-01-17T00:03:34.955497949Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:34.957248 containerd[1486]: time="2026-01-17T00:03:34.957183786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:03:34.957366 containerd[1486]: time="2026-01-17T00:03:34.957310386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:34.957658 kubelet[2574]: E0117 00:03:34.957471 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:34.957658 kubelet[2574]: E0117 00:03:34.957559 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:34.958267 kubelet[2574]: E0117 00:03:34.957773 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-txw7d_calico-system(362a3452-c30b-406b-9bbb-9543b4b09e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:34.958267 kubelet[2574]: E0117 00:03:34.957807 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:03:34.959594 containerd[1486]: time="2026-01-17T00:03:34.958847583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:03:35.007867 containerd[1486]: time="2026-01-17T00:03:35.007745373Z" level=info msg="StopPodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\"" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.073 [INFO][4794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.073 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" iface="eth0" netns="/var/run/netns/cni-c4f1d8cd-71b2-cdef-0fca-a08e671e50fa" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.074 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" iface="eth0" netns="/var/run/netns/cni-c4f1d8cd-71b2-cdef-0fca-a08e671e50fa" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.074 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" iface="eth0" netns="/var/run/netns/cni-c4f1d8cd-71b2-cdef-0fca-a08e671e50fa" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.074 [INFO][4794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.074 [INFO][4794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.100 [INFO][4802] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.100 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.100 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.110 [WARNING][4802] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.110 [INFO][4802] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.113 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:35.122215 containerd[1486]: 2026-01-17 00:03:35.117 [INFO][4794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:35.122215 containerd[1486]: time="2026-01-17T00:03:35.121728613Z" level=info msg="TearDown network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" successfully" Jan 17 00:03:35.122215 containerd[1486]: time="2026-01-17T00:03:35.121787773Z" level=info msg="StopPodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" returns successfully" Jan 17 00:03:35.126406 containerd[1486]: time="2026-01-17T00:03:35.126329205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rctkw,Uid:e730921e-fe6a-4325-b721-055844e798ac,Namespace:calico-system,Attempt:1,}" Jan 17 00:03:35.191202 systemd[1]: run-netns-cni\x2dc4f1d8cd\x2d71b2\x2dcdef\x2d0fca\x2da08e671e50fa.mount: Deactivated successfully. Jan 17 00:03:35.285108 systemd-networkd[1370]: cali5802a36a103: Link UP Jan 17 00:03:35.285344 systemd-networkd[1370]: cali5802a36a103: Gained carrier Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.180 [INFO][4809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0 csi-node-driver- calico-system e730921e-fe6a-4325-b721-055844e798ac 1041 0 2026-01-17 00:03:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-089d3b6582 csi-node-driver-rctkw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5802a36a103 [] [] }} ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.180 [INFO][4809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.220 [INFO][4821] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" 
HandleID="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.220 [INFO][4821] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" HandleID="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-089d3b6582", "pod":"csi-node-driver-rctkw", "timestamp":"2026-01-17 00:03:35.220277121 +0000 UTC"}, Hostname:"ci-4081-3-6-n-089d3b6582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.220 [INFO][4821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.220 [INFO][4821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.220 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-089d3b6582' Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.235 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.242 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.249 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.251 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.255 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.0/26 host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.255 [INFO][4821] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.0/26 handle="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.258 [INFO][4821] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85 Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.264 [INFO][4821] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.0/26 handle="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.277 [INFO][4821] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.8/26] block=192.168.83.0/26 handle="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.277 [INFO][4821] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.8/26] handle="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" host="ci-4081-3-6-n-089d3b6582" Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.277 [INFO][4821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:35.306067 containerd[1486]: 2026-01-17 00:03:35.277 [INFO][4821] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.8/26] IPv6=[] ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" HandleID="k8s-pod-network.9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.307718 containerd[1486]: 2026-01-17 00:03:35.279 [INFO][4809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e730921e-fe6a-4325-b721-055844e798ac", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"", Pod:"csi-node-driver-rctkw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5802a36a103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:35.307718 containerd[1486]: 2026-01-17 00:03:35.279 [INFO][4809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.8/32] ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.307718 containerd[1486]: 2026-01-17 00:03:35.279 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5802a36a103 ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.307718 containerd[1486]: 2026-01-17 00:03:35.284 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" 
Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.307718 containerd[1486]: 2026-01-17 00:03:35.284 [INFO][4809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e730921e-fe6a-4325-b721-055844e798ac", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85", Pod:"csi-node-driver-rctkw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5802a36a103", MAC:"7e:20:e0:9f:e8:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:35.307718 containerd[1486]: 2026-01-17 00:03:35.301 [INFO][4809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85" Namespace="calico-system" Pod="csi-node-driver-rctkw" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:35.307718 containerd[1486]: time="2026-01-17T00:03:35.305354652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:35.308565 containerd[1486]: time="2026-01-17T00:03:35.308086927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:03:35.308565 containerd[1486]: time="2026-01-17T00:03:35.308234767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:35.308883 kubelet[2574]: E0117 00:03:35.308754 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:35.308883 kubelet[2574]: E0117 00:03:35.308892 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:35.309181 kubelet[2574]: E0117 00:03:35.309019 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:35.309181 kubelet[2574]: E0117 00:03:35.309158 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:03:35.332381 containerd[1486]: time="2026-01-17T00:03:35.332105205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:35.332381 containerd[1486]: time="2026-01-17T00:03:35.332336125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:35.332638 containerd[1486]: time="2026-01-17T00:03:35.332426565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:35.333049 containerd[1486]: time="2026-01-17T00:03:35.332984724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:35.371741 systemd[1]: Started cri-containerd-9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85.scope - libcontainer container 9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85. 
Jan 17 00:03:35.382499 kubelet[2574]: E0117 00:03:35.381851 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:03:35.389713 kubelet[2574]: E0117 00:03:35.389652 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:03:35.436773 containerd[1486]: time="2026-01-17T00:03:35.436709463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rctkw,Uid:e730921e-fe6a-4325-b721-055844e798ac,Namespace:calico-system,Attempt:1,} returns sandbox id \"9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85\"" Jan 17 00:03:35.440565 containerd[1486]: time="2026-01-17T00:03:35.440081297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:03:35.799123 containerd[1486]: time="2026-01-17T00:03:35.798838509Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:35.801295 containerd[1486]: time="2026-01-17T00:03:35.801099065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:03:35.801295 containerd[1486]: time="2026-01-17T00:03:35.801252745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:03:35.803453 kubelet[2574]: E0117 00:03:35.802121 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:35.803453 kubelet[2574]: E0117 00:03:35.802177 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:35.803453 kubelet[2574]: E0117 00:03:35.802261 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:35.805819 containerd[1486]: time="2026-01-17T00:03:35.805699697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:03:35.978670 systemd-networkd[1370]: calie9449b07005: Gained IPv6LL Jan 17 00:03:36.147174 containerd[1486]: time="2026-01-17T00:03:36.147120196Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:36.148466 containerd[1486]: time="2026-01-17T00:03:36.148344594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:03:36.148466 containerd[1486]: time="2026-01-17T00:03:36.148418314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:03:36.150372 kubelet[2574]: E0117 00:03:36.149082 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:36.150372 kubelet[2574]: E0117 00:03:36.149132 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:36.150372 kubelet[2574]: E0117 00:03:36.149213 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:36.150804 kubelet[2574]: E0117 00:03:36.149260 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:36.363811 systemd-networkd[1370]: calif5bd7382ce0: Gained IPv6LL Jan 17 00:03:36.393072 kubelet[2574]: E0117 00:03:36.392820 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:03:36.393072 kubelet[2574]: E0117 00:03:36.392953 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:03:36.395988 kubelet[2574]: E0117 00:03:36.395935 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:37.324435 systemd-networkd[1370]: cali5802a36a103: Gained IPv6LL Jan 17 00:03:37.394184 kubelet[2574]: E0117 00:03:37.394011 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:42.006822 containerd[1486]: time="2026-01-17T00:03:42.006664368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:03:42.368928 containerd[1486]: time="2026-01-17T00:03:42.368700325Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:42.370182 containerd[1486]: time="2026-01-17T00:03:42.370046083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:03:42.370182 containerd[1486]: time="2026-01-17T00:03:42.370127963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:03:42.370366 kubelet[2574]: E0117 00:03:42.370326 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:42.370782 kubelet[2574]: E0117 00:03:42.370382 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:42.370782 kubelet[2574]: E0117 00:03:42.370472 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:42.373373 containerd[1486]: time="2026-01-17T00:03:42.373330279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:03:42.738237 containerd[1486]: time="2026-01-17T00:03:42.737638314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:42.740643 containerd[1486]: time="2026-01-17T00:03:42.740569191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:03:42.740822 containerd[1486]: time="2026-01-17T00:03:42.740719591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:42.740975 kubelet[2574]: E0117 00:03:42.740912 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:42.740975 kubelet[2574]: E0117 00:03:42.740962 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:42.741059 kubelet[2574]: E0117 00:03:42.741034 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:42.741101 kubelet[2574]: E0117 00:03:42.741072 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:03:43.023817 containerd[1486]: time="2026-01-17T00:03:43.023483437Z" level=info msg="StopPodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\"" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.072 [WARNING][4895] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0", GenerateName:"calico-kube-controllers-7d698fdbf4-", Namespace:"calico-system", SelfLink:"", UID:"a5e03e55-071e-4370-bbe3-a19857cfbfbd", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d698fdbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46", Pod:"calico-kube-controllers-7d698fdbf4-vwrcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5bd7382ce0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.072 [INFO][4895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.072 [INFO][4895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" iface="eth0" netns="" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.072 [INFO][4895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.072 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.100 [INFO][4903] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.100 [INFO][4903] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.100 [INFO][4903] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.123 [WARNING][4903] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.123 [INFO][4903] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.126 [INFO][4903] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.131417 containerd[1486]: 2026-01-17 00:03:43.129 [INFO][4895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.132944 containerd[1486]: time="2026-01-17T00:03:43.132339164Z" level=info msg="TearDown network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" successfully" Jan 17 00:03:43.132944 containerd[1486]: time="2026-01-17T00:03:43.132383844Z" level=info msg="StopPodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" returns successfully" Jan 17 00:03:43.133412 containerd[1486]: time="2026-01-17T00:03:43.133125243Z" level=info msg="RemovePodSandbox for \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\"" Jan 17 00:03:43.140764 containerd[1486]: time="2026-01-17T00:03:43.140689995Z" level=info msg="Forcibly stopping sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\"" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.189 [WARNING][4917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0", GenerateName:"calico-kube-controllers-7d698fdbf4-", Namespace:"calico-system", SelfLink:"", UID:"a5e03e55-071e-4370-bbe3-a19857cfbfbd", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d698fdbf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"70a4496f4998afc4647af7fcd192e92f6394edbca89adf672be5e00a7fecff46", Pod:"calico-kube-controllers-7d698fdbf4-vwrcc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5bd7382ce0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.190 [INFO][4917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.190 [INFO][4917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" iface="eth0" netns="" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.190 [INFO][4917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.190 [INFO][4917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.214 [INFO][4925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.214 [INFO][4925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.214 [INFO][4925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.225 [WARNING][4925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.225 [INFO][4925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" HandleID="k8s-pod-network.27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--kube--controllers--7d698fdbf4--vwrcc-eth0" Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.228 [INFO][4925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.233591 containerd[1486]: 2026-01-17 00:03:43.230 [INFO][4917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df" Jan 17 00:03:43.234373 containerd[1486]: time="2026-01-17T00:03:43.233642338Z" level=info msg="TearDown network for sandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" successfully" Jan 17 00:03:43.237928 containerd[1486]: time="2026-01-17T00:03:43.237841014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:43.238089 containerd[1486]: time="2026-01-17T00:03:43.237950334Z" level=info msg="RemovePodSandbox \"27a690bf033c76daafcb5a82f379e53af65b45ab338ab00c59712730d464b3df\" returns successfully" Jan 17 00:03:43.239020 containerd[1486]: time="2026-01-17T00:03:43.238812133Z" level=info msg="StopPodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\"" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.288 [WARNING][4939] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4408c45d-c746-4759-bf63-32d8b6b15581", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc", Pod:"coredns-66bc5c9577-v9lnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab675e29e06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.290 [INFO][4939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.290 [INFO][4939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" iface="eth0" netns="" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.290 [INFO][4939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.290 [INFO][4939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.318 [INFO][4946] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.319 [INFO][4946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.319 [INFO][4946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.331 [WARNING][4946] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.332 [INFO][4946] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.334 [INFO][4946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.340396 containerd[1486]: 2026-01-17 00:03:43.337 [INFO][4939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.341217 containerd[1486]: time="2026-01-17T00:03:43.340446827Z" level=info msg="TearDown network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" successfully" Jan 17 00:03:43.341217 containerd[1486]: time="2026-01-17T00:03:43.340478867Z" level=info msg="StopPodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" returns successfully" Jan 17 00:03:43.341217 containerd[1486]: time="2026-01-17T00:03:43.341024786Z" level=info msg="RemovePodSandbox for \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\"" Jan 17 00:03:43.341217 containerd[1486]: time="2026-01-17T00:03:43.341061106Z" level=info msg="Forcibly stopping sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\"" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.381 [WARNING][4961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4408c45d-c746-4759-bf63-32d8b6b15581", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"ccbdd315b3e281d25d4733f8fd25d816a16708e5f29a15c60a2ca5fc1fc2d0dc", Pod:"coredns-66bc5c9577-v9lnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab675e29e06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.381 [INFO][4961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.381 [INFO][4961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" iface="eth0" netns="" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.381 [INFO][4961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.381 [INFO][4961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.404 [INFO][4968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.404 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.404 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.423 [WARNING][4968] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.423 [INFO][4968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" HandleID="k8s-pod-network.5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--v9lnn-eth0" Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.426 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.431541 containerd[1486]: 2026-01-17 00:03:43.429 [INFO][4961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b" Jan 17 00:03:43.431541 containerd[1486]: time="2026-01-17T00:03:43.431482132Z" level=info msg="TearDown network for sandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" successfully" Jan 17 00:03:43.435932 containerd[1486]: time="2026-01-17T00:03:43.435846167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:43.436479 containerd[1486]: time="2026-01-17T00:03:43.435946447Z" level=info msg="RemovePodSandbox \"5d233e9198350df57d2ee125834c69b7f8ceb5b4bc1ebbcabdd51c0bb069225b\" returns successfully" Jan 17 00:03:43.436689 containerd[1486]: time="2026-01-17T00:03:43.436533486Z" level=info msg="StopPodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\"" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.478 [WARNING][4982] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e730921e-fe6a-4325-b721-055844e798ac", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85", Pod:"csi-node-driver-rctkw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5802a36a103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.478 [INFO][4982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.478 [INFO][4982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" iface="eth0" netns="" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.478 [INFO][4982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.478 [INFO][4982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.503 [INFO][4989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.503 [INFO][4989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.503 [INFO][4989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.517 [WARNING][4989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.517 [INFO][4989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.520 [INFO][4989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.526712 containerd[1486]: 2026-01-17 00:03:43.522 [INFO][4982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.526712 containerd[1486]: time="2026-01-17T00:03:43.526310593Z" level=info msg="TearDown network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" successfully" Jan 17 00:03:43.526712 containerd[1486]: time="2026-01-17T00:03:43.526347553Z" level=info msg="StopPodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" returns successfully" Jan 17 00:03:43.529551 containerd[1486]: time="2026-01-17T00:03:43.528992950Z" level=info msg="RemovePodSandbox for \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\"" Jan 17 00:03:43.529551 containerd[1486]: time="2026-01-17T00:03:43.529062470Z" level=info msg="Forcibly stopping sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\"" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.572 [WARNING][5004] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e730921e-fe6a-4325-b721-055844e798ac", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"9fa65da12df338e255c26819778c7b867f919d7816a936def2665a155d5bed85", Pod:"csi-node-driver-rctkw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5802a36a103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.573 [INFO][5004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.573 [INFO][5004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" iface="eth0" netns="" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.573 [INFO][5004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.573 [INFO][5004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.597 [INFO][5011] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.597 [INFO][5011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.598 [INFO][5011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.617 [WARNING][5011] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.617 [INFO][5011] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" HandleID="k8s-pod-network.a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Workload="ci--4081--3--6--n--089d3b6582-k8s-csi--node--driver--rctkw-eth0" Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.620 [INFO][5011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.625192 containerd[1486]: 2026-01-17 00:03:43.622 [INFO][5004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304" Jan 17 00:03:43.625192 containerd[1486]: time="2026-01-17T00:03:43.625094970Z" level=info msg="TearDown network for sandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" successfully" Jan 17 00:03:43.631011 containerd[1486]: time="2026-01-17T00:03:43.630924364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:43.631011 containerd[1486]: time="2026-01-17T00:03:43.631005323Z" level=info msg="RemovePodSandbox \"a76ef09f46457f3607900b50036a6a004db9f7e55f23977ab25250d12bc43304\" returns successfully" Jan 17 00:03:43.631532 containerd[1486]: time="2026-01-17T00:03:43.631490763Z" level=info msg="StopPodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\"" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.672 [WARNING][5025] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.672 [INFO][5025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.672 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" iface="eth0" netns="" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.672 [INFO][5025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.672 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.698 [INFO][5032] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.698 [INFO][5032] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.698 [INFO][5032] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.710 [WARNING][5032] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.710 [INFO][5032] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.713 [INFO][5032] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.717198 containerd[1486]: 2026-01-17 00:03:43.715 [INFO][5025] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.717846 containerd[1486]: time="2026-01-17T00:03:43.717246793Z" level=info msg="TearDown network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" successfully" Jan 17 00:03:43.717846 containerd[1486]: time="2026-01-17T00:03:43.717278273Z" level=info msg="StopPodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" returns successfully" Jan 17 00:03:43.718023 containerd[1486]: time="2026-01-17T00:03:43.717948233Z" level=info msg="RemovePodSandbox for \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\"" Jan 17 00:03:43.718023 containerd[1486]: time="2026-01-17T00:03:43.718000433Z" level=info msg="Forcibly stopping sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\"" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.758 [WARNING][5046] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" WorkloadEndpoint="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.758 [INFO][5046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.758 [INFO][5046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" iface="eth0" netns="" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.758 [INFO][5046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.758 [INFO][5046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.780 [INFO][5053] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.780 [INFO][5053] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.780 [INFO][5053] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.790 [WARNING][5053] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.791 [INFO][5053] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" HandleID="k8s-pod-network.9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Workload="ci--4081--3--6--n--089d3b6582-k8s-whisker--7547b866bf--thdt2-eth0" Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.793 [INFO][5053] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.799034 containerd[1486]: 2026-01-17 00:03:43.795 [INFO][5046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7" Jan 17 00:03:43.799034 containerd[1486]: time="2026-01-17T00:03:43.798129189Z" level=info msg="TearDown network for sandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" successfully" Jan 17 00:03:43.813213 containerd[1486]: time="2026-01-17T00:03:43.813138653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:43.813376 containerd[1486]: time="2026-01-17T00:03:43.813250973Z" level=info msg="RemovePodSandbox \"9feb035e01873d554e00b29e6436fb46d02fdace8a4de1d53e6321f8f7683fd7\" returns successfully" Jan 17 00:03:43.814538 containerd[1486]: time="2026-01-17T00:03:43.814192012Z" level=info msg="StopPodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\"" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.855 [WARNING][5067] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738", Pod:"calico-apiserver-798d7c56dc-6ghq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif46e10d7769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.856 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.856 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" iface="eth0" netns="" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.856 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.856 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.881 [INFO][5074] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.881 [INFO][5074] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.881 [INFO][5074] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.895 [WARNING][5074] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.895 [INFO][5074] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.897 [INFO][5074] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.901802 containerd[1486]: 2026-01-17 00:03:43.899 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.901802 containerd[1486]: time="2026-01-17T00:03:43.901758641Z" level=info msg="TearDown network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" successfully" Jan 17 00:03:43.901802 containerd[1486]: time="2026-01-17T00:03:43.901799881Z" level=info msg="StopPodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" returns successfully" Jan 17 00:03:43.904364 containerd[1486]: time="2026-01-17T00:03:43.903530319Z" level=info msg="RemovePodSandbox for \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\"" Jan 17 00:03:43.904364 containerd[1486]: time="2026-01-17T00:03:43.903583999Z" level=info msg="Forcibly stopping sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\"" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.948 [WARNING][5088] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3a9d9fee-4b98-43fe-862d-a1e26e86f2ee", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"bf0b625eb76ca0e6e7c4d4e3c8e3edae64889f8b9a235ba45d15ba900fc35738", Pod:"calico-apiserver-798d7c56dc-6ghq5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif46e10d7769", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.948 [INFO][5088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.949 [INFO][5088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" iface="eth0" netns="" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.949 [INFO][5088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.949 [INFO][5088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.974 [INFO][5095] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.974 [INFO][5095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.974 [INFO][5095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.991 [WARNING][5095] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.991 [INFO][5095] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" HandleID="k8s-pod-network.9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--6ghq5-eth0" Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.993 [INFO][5095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:43.999552 containerd[1486]: 2026-01-17 00:03:43.996 [INFO][5088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f" Jan 17 00:03:43.999552 containerd[1486]: time="2026-01-17T00:03:43.998671060Z" level=info msg="TearDown network for sandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" successfully" Jan 17 00:03:44.003951 containerd[1486]: time="2026-01-17T00:03:44.003844855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:44.004650 containerd[1486]: time="2026-01-17T00:03:44.003997774Z" level=info msg="RemovePodSandbox \"9063894cbf1965633c8a9745615747b07734c9c198d2ee9f6c6c9fbeac1a881f\" returns successfully" Jan 17 00:03:44.005608 containerd[1486]: time="2026-01-17T00:03:44.004999413Z" level=info msg="StopPodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\"" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.054 [WARNING][5109] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2865d0a-d4d2-402d-89fc-69d90c7c76b9", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90", Pod:"calico-apiserver-798d7c56dc-ghv47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3059b58743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.054 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.054 [INFO][5109] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" iface="eth0" netns="" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.054 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.054 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.082 [INFO][5116] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.083 [INFO][5116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.083 [INFO][5116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.093 [WARNING][5116] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.093 [INFO][5116] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.096 [INFO][5116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:44.102202 containerd[1486]: 2026-01-17 00:03:44.099 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.103692 containerd[1486]: time="2026-01-17T00:03:44.102228198Z" level=info msg="TearDown network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" successfully" Jan 17 00:03:44.103692 containerd[1486]: time="2026-01-17T00:03:44.102257638Z" level=info msg="StopPodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" returns successfully" Jan 17 00:03:44.103692 containerd[1486]: time="2026-01-17T00:03:44.102918878Z" level=info msg="RemovePodSandbox for \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\"" Jan 17 00:03:44.103692 containerd[1486]: time="2026-01-17T00:03:44.102959318Z" level=info msg="Forcibly stopping sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\"" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.145 [WARNING][5130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0", GenerateName:"calico-apiserver-798d7c56dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2865d0a-d4d2-402d-89fc-69d90c7c76b9", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798d7c56dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"743163d9593f30b61614d5fee8e8850ebf304e0a35aec9d150fa94c58ac85d90", Pod:"calico-apiserver-798d7c56dc-ghv47", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3059b58743", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.145 [INFO][5130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.145 [INFO][5130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" iface="eth0" netns="" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.145 [INFO][5130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.145 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.168 [INFO][5137] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.169 [INFO][5137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.169 [INFO][5137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.180 [WARNING][5137] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.180 [INFO][5137] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" HandleID="k8s-pod-network.8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Workload="ci--4081--3--6--n--089d3b6582-k8s-calico--apiserver--798d7c56dc--ghv47-eth0" Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.183 [INFO][5137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:44.188794 containerd[1486]: 2026-01-17 00:03:44.185 [INFO][5130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9" Jan 17 00:03:44.188794 containerd[1486]: time="2026-01-17T00:03:44.188756314Z" level=info msg="TearDown network for sandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" successfully" Jan 17 00:03:44.197090 containerd[1486]: time="2026-01-17T00:03:44.197025586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:44.197429 containerd[1486]: time="2026-01-17T00:03:44.197227305Z" level=info msg="RemovePodSandbox \"8055a463ec2a1e135f19b2b30d905eebae20380735b1ec6f4a665c1b72d922b9\" returns successfully" Jan 17 00:03:44.198213 containerd[1486]: time="2026-01-17T00:03:44.198107305Z" level=info msg="StopPodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\"" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.243 [WARNING][5151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298", Pod:"coredns-66bc5c9577-nkbkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84b86588260", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.243 [INFO][5151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.243 [INFO][5151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" iface="eth0" netns="" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.243 [INFO][5151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.243 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.264 [INFO][5158] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.264 [INFO][5158] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.264 [INFO][5158] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.275 [WARNING][5158] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.275 [INFO][5158] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.277 [INFO][5158] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:44.282041 containerd[1486]: 2026-01-17 00:03:44.279 [INFO][5151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.282886 containerd[1486]: time="2026-01-17T00:03:44.282060062Z" level=info msg="TearDown network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" successfully" Jan 17 00:03:44.282886 containerd[1486]: time="2026-01-17T00:03:44.282084942Z" level=info msg="StopPodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" returns successfully" Jan 17 00:03:44.283499 containerd[1486]: time="2026-01-17T00:03:44.283467941Z" level=info msg="RemovePodSandbox for \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\"" Jan 17 00:03:44.283823 containerd[1486]: time="2026-01-17T00:03:44.283622901Z" level=info msg="Forcibly stopping sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\"" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.329 [WARNING][5172] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"4cd10ffc-3ab7-4de4-a249-8f0e6fd50b38", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"74066e44a8d02c44107d8fc396c6160fcd9706534acadaf0d908fa6cdeb46298", Pod:"coredns-66bc5c9577-nkbkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84b86588260", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.329 [INFO][5172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.329 [INFO][5172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" iface="eth0" netns="" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.329 [INFO][5172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.329 [INFO][5172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.350 [INFO][5179] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.350 [INFO][5179] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.350 [INFO][5179] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.362 [WARNING][5179] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.362 [INFO][5179] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" HandleID="k8s-pod-network.ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Workload="ci--4081--3--6--n--089d3b6582-k8s-coredns--66bc5c9577--nkbkp-eth0" Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.365 [INFO][5179] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:44.369808 containerd[1486]: 2026-01-17 00:03:44.367 [INFO][5172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564" Jan 17 00:03:44.370288 containerd[1486]: time="2026-01-17T00:03:44.369856137Z" level=info msg="TearDown network for sandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" successfully" Jan 17 00:03:44.375210 containerd[1486]: time="2026-01-17T00:03:44.375136251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:44.375367 containerd[1486]: time="2026-01-17T00:03:44.375221651Z" level=info msg="RemovePodSandbox \"ad5572b32a8d6e4a9455491049ba2393f0a99fb5f98bb00a81e7ac0bdd7d9564\" returns successfully" Jan 17 00:03:44.376116 containerd[1486]: time="2026-01-17T00:03:44.376083850Z" level=info msg="StopPodSandbox for \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\"" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.424 [WARNING][5193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"362a3452-c30b-406b-9bbb-9543b4b09e90", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a", Pod:"goldmane-7c778bb748-txw7d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9449b07005", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.425 [INFO][5193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.425 [INFO][5193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" iface="eth0" netns="" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.425 [INFO][5193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.425 [INFO][5193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.449 [INFO][5200] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.449 [INFO][5200] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.449 [INFO][5200] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.459 [WARNING][5200] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.459 [INFO][5200] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.462 [INFO][5200] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:44.466148 containerd[1486]: 2026-01-17 00:03:44.463 [INFO][5193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.466569 containerd[1486]: time="2026-01-17T00:03:44.466126002Z" level=info msg="TearDown network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" successfully" Jan 17 00:03:44.466569 containerd[1486]: time="2026-01-17T00:03:44.466166682Z" level=info msg="StopPodSandbox for \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" returns successfully" Jan 17 00:03:44.467833 containerd[1486]: time="2026-01-17T00:03:44.467632481Z" level=info msg="RemovePodSandbox for \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\"" Jan 17 00:03:44.467925 containerd[1486]: time="2026-01-17T00:03:44.467853921Z" level=info msg="Forcibly stopping sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\"" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.514 [WARNING][5214] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"362a3452-c30b-406b-9bbb-9543b4b09e90", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-089d3b6582", ContainerID:"0e9fda153a2fb9320e7618f6ec36e87ccd42849fd85ae188c9bea5abf2413f4a", Pod:"goldmane-7c778bb748-txw7d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie9449b07005", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.516 [INFO][5214] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.516 [INFO][5214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" iface="eth0" netns="" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.516 [INFO][5214] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.516 [INFO][5214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.538 [INFO][5221] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.538 [INFO][5221] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.538 [INFO][5221] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.550 [WARNING][5221] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.550 [INFO][5221] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" HandleID="k8s-pod-network.ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Workload="ci--4081--3--6--n--089d3b6582-k8s-goldmane--7c778bb748--txw7d-eth0" Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.553 [INFO][5221] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:44.559325 containerd[1486]: 2026-01-17 00:03:44.555 [INFO][5214] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37" Jan 17 00:03:44.559851 containerd[1486]: time="2026-01-17T00:03:44.559377271Z" level=info msg="TearDown network for sandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" successfully" Jan 17 00:03:44.572555 containerd[1486]: time="2026-01-17T00:03:44.572414458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:44.573005 containerd[1486]: time="2026-01-17T00:03:44.572678458Z" level=info msg="RemovePodSandbox \"ed05ca789eeaf7365350956f6218dd6131078d8a6cab994310878e8c8c57ed37\" returns successfully" Jan 17 00:03:45.008842 containerd[1486]: time="2026-01-17T00:03:45.008569672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:45.349943 containerd[1486]: time="2026-01-17T00:03:45.349669239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:45.351608 containerd[1486]: time="2026-01-17T00:03:45.351541238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:45.351866 containerd[1486]: time="2026-01-17T00:03:45.351659317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:45.351977 kubelet[2574]: E0117 00:03:45.351804 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:45.351977 kubelet[2574]: E0117 00:03:45.351857 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:45.352616 kubelet[2574]: E0117 00:03:45.352027 2574 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:45.352727 kubelet[2574]: E0117 00:03:45.352621 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:03:46.006024 containerd[1486]: time="2026-01-17T00:03:46.005596918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:46.344408 containerd[1486]: time="2026-01-17T00:03:46.344170987Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:46.346359 containerd[1486]: time="2026-01-17T00:03:46.346124385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:46.346359 containerd[1486]: time="2026-01-17T00:03:46.346292345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:46.348749 kubelet[2574]: E0117 00:03:46.346973 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:46.348749 kubelet[2574]: E0117 00:03:46.347034 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:46.348749 kubelet[2574]: E0117 00:03:46.347175 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-6ghq5_calico-apiserver(3a9d9fee-4b98-43fe-862d-a1e26e86f2ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:46.348749 kubelet[2574]: E0117 00:03:46.347225 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:03:48.004955 containerd[1486]: time="2026-01-17T00:03:48.004485453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:03:48.374945 containerd[1486]: time="2026-01-17T00:03:48.374807294Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:48.376788 containerd[1486]: time="2026-01-17T00:03:48.376707332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:03:48.377133 containerd[1486]: time="2026-01-17T00:03:48.376920652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:03:48.378679 kubelet[2574]: E0117 00:03:48.377147 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:48.378679 kubelet[2574]: E0117 00:03:48.377220 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:48.378679 kubelet[2574]: E0117 00:03:48.377342 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:48.380329 containerd[1486]: time="2026-01-17T00:03:48.379979050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:03:48.729625 containerd[1486]: time="2026-01-17T00:03:48.729420786Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:48.730995 containerd[1486]: time="2026-01-17T00:03:48.730848704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:03:48.730995 containerd[1486]: time="2026-01-17T00:03:48.730951064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:03:48.731230 kubelet[2574]: E0117 00:03:48.731172 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:48.731753 kubelet[2574]: E0117 00:03:48.731237 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:48.731753 kubelet[2574]: E0117 00:03:48.731337 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:48.731753 kubelet[2574]: E0117 00:03:48.731394 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:03:49.008399 containerd[1486]: time="2026-01-17T00:03:49.007992375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:03:49.347639 containerd[1486]: time="2026-01-17T00:03:49.347389855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:49.349833 containerd[1486]: time="2026-01-17T00:03:49.349696453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:03:49.349833 containerd[1486]: time="2026-01-17T00:03:49.349757533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:49.350112 kubelet[2574]: E0117 00:03:49.350057 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:49.350182 kubelet[2574]: E0117 00:03:49.350116 2574 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:49.350219 kubelet[2574]: E0117 00:03:49.350206 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-txw7d_calico-system(362a3452-c30b-406b-9bbb-9543b4b09e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:49.350279 kubelet[2574]: E0117 00:03:49.350244 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:03:52.005285 containerd[1486]: time="2026-01-17T00:03:52.004781136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:03:52.347141 containerd[1486]: time="2026-01-17T00:03:52.347007379Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:52.348823 containerd[1486]: time="2026-01-17T00:03:52.348722459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:03:52.348997 containerd[1486]: time="2026-01-17T00:03:52.348957099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:52.349572 kubelet[2574]: E0117 00:03:52.349219 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:52.349572 kubelet[2574]: E0117 00:03:52.349297 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:52.349572 kubelet[2574]: E0117 00:03:52.349398 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:52.349572 kubelet[2574]: E0117 00:03:52.349454 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:03:56.009551 kubelet[2574]: E0117 00:03:56.009097 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:03:59.007088 kubelet[2574]: E0117 00:03:59.006969 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:04:01.005123 kubelet[2574]: E0117 00:04:01.005038 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:04:02.009768 kubelet[2574]: E0117 00:04:02.008814 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:04:03.009485 kubelet[2574]: E0117 00:04:03.009430 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:04:04.006651 kubelet[2574]: E0117 00:04:04.006593 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:04:10.007024 containerd[1486]: time="2026-01-17T00:04:10.006964929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:04:10.350472 containerd[1486]: time="2026-01-17T00:04:10.350420146Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:10.351997 containerd[1486]: time="2026-01-17T00:04:10.351937466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:04:10.352115 containerd[1486]: time="2026-01-17T00:04:10.352040986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:04:10.352912 kubelet[2574]: E0117 00:04:10.352297 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:04:10.352912 kubelet[2574]: E0117 00:04:10.352345 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:04:10.352912 kubelet[2574]: E0117 00:04:10.352415 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:10.355436 containerd[1486]: time="2026-01-17T00:04:10.355202426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:04:10.690025 containerd[1486]: time="2026-01-17T00:04:10.689798724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:10.691955 containerd[1486]: time="2026-01-17T00:04:10.691892923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:04:10.692102 containerd[1486]: time="2026-01-17T00:04:10.692028443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:04:10.692534 kubelet[2574]: E0117 00:04:10.692366 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:04:10.692534 kubelet[2574]: E0117 00:04:10.692418 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:04:10.693867 kubelet[2574]: E0117 00:04:10.692497 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:10.693867 kubelet[2574]: E0117 00:04:10.692953 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:04:11.363880 systemd[1]: Started sshd@8-167.235.246.183:22-185.246.128.171:52395.service - OpenSSH per-connection server daemon (185.246.128.171:52395). Jan 17 00:04:13.012027 containerd[1486]: time="2026-01-17T00:04:13.011979294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:04:13.351498 containerd[1486]: time="2026-01-17T00:04:13.351422433Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:13.353808 containerd[1486]: time="2026-01-17T00:04:13.353751112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:04:13.353808 containerd[1486]: time="2026-01-17T00:04:13.353771872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:13.355240 kubelet[2574]: E0117 00:04:13.354440 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:13.355240 kubelet[2574]: E0117 00:04:13.354495 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:13.355240 kubelet[2574]: E0117 00:04:13.354742 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-6ghq5_calico-apiserver(3a9d9fee-4b98-43fe-862d-a1e26e86f2ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:13.355240 kubelet[2574]: E0117 00:04:13.354779 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:04:13.356445 containerd[1486]: time="2026-01-17T00:04:13.356065272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:04:13.694798 containerd[1486]: time="2026-01-17T00:04:13.694544451Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Jan 17 00:04:13.697760 containerd[1486]: time="2026-01-17T00:04:13.697593251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:04:13.697760 containerd[1486]: time="2026-01-17T00:04:13.697629291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:13.698696 kubelet[2574]: E0117 00:04:13.698127 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:13.698696 kubelet[2574]: E0117 00:04:13.698182 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:13.698696 kubelet[2574]: E0117 00:04:13.698279 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:13.698906 kubelet[2574]: E0117 00:04:13.698313 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:04:13.937088 sshd[5272]: Invalid user 0 from 185.246.128.171 port 52395 Jan 17 00:04:15.006681 containerd[1486]: time="2026-01-17T00:04:15.006608132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:04:15.361440 containerd[1486]: time="2026-01-17T00:04:15.361252151Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:15.363108 containerd[1486]: time="2026-01-17T00:04:15.362952991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:04:15.363246 containerd[1486]: time="2026-01-17T00:04:15.363011951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:04:15.363556 kubelet[2574]: E0117 00:04:15.363503 2574 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:15.363959 kubelet[2574]: E0117 00:04:15.363566 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:15.363959 kubelet[2574]: E0117 00:04:15.363643 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:15.365384 containerd[1486]: time="2026-01-17T00:04:15.365132191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:04:15.702145 containerd[1486]: time="2026-01-17T00:04:15.701961691Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:15.703659 containerd[1486]: time="2026-01-17T00:04:15.703597331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:04:15.703927 containerd[1486]: time="2026-01-17T00:04:15.703897651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:04:15.704086 kubelet[2574]: E0117 00:04:15.704045 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:15.704181 kubelet[2574]: E0117 00:04:15.704099 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:15.704319 kubelet[2574]: E0117 00:04:15.704189 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:15.704319 kubelet[2574]: E0117 00:04:15.704238 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:04:16.021037 sshd[5272]: Disconnecting invalid user 0 185.246.128.171 port 52395: Change of username or service not allowed: (0,ssh-connection) -> (pi,ssh-connection) [preauth] Jan 17 00:04:16.025654 systemd[1]: sshd@8-167.235.246.183:22-185.246.128.171:52395.service: Deactivated successfully. Jan 17 00:04:16.977950 systemd[1]: Started sshd@9-167.235.246.183:22-185.246.128.171:40304.service - OpenSSH per-connection server daemon (185.246.128.171:40304). Jan 17 00:04:17.007324 containerd[1486]: time="2026-01-17T00:04:17.007246096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:04:17.351067 containerd[1486]: time="2026-01-17T00:04:17.350918997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:17.352495 containerd[1486]: time="2026-01-17T00:04:17.352421916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:04:17.352818 containerd[1486]: time="2026-01-17T00:04:17.352582596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:17.352958 kubelet[2574]: E0117 00:04:17.352759 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:04:17.352958 kubelet[2574]: E0117 00:04:17.352812 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:04:17.352958 kubelet[2574]: E0117 00:04:17.352941 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-txw7d_calico-system(362a3452-c30b-406b-9bbb-9543b4b09e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:17.354329 kubelet[2574]: E0117 00:04:17.352987 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:04:17.771867 sshd[5279]: Invalid user pi from 185.246.128.171 port 40304 Jan 17 00:04:19.008164 containerd[1486]: time="2026-01-17T00:04:19.007821505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:04:19.281780 sshd[5279]: Disconnecting invalid user pi 185.246.128.171 port 40304: Change of username or service not allowed: (pi,ssh-connection) -> (azure,ssh-connection) [preauth] Jan 17 00:04:19.283665 systemd[1]: sshd@9-167.235.246.183:22-185.246.128.171:40304.service: Deactivated successfully. Jan 17 00:04:19.365818 containerd[1486]: time="2026-01-17T00:04:19.365602326Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:19.368956 containerd[1486]: time="2026-01-17T00:04:19.368771366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:04:19.368956 containerd[1486]: time="2026-01-17T00:04:19.368880926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:04:19.369921 kubelet[2574]: E0117 00:04:19.369468 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:04:19.369921 kubelet[2574]: E0117 00:04:19.369667 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:04:19.370954 kubelet[2574]: E0117 00:04:19.370649 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:19.370954 kubelet[2574]: E0117 00:04:19.370755 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:04:20.059004 systemd[1]: Started sshd@10-167.235.246.183:22-185.246.128.171:58597.service - OpenSSH per-connection server daemon (185.246.128.171:58597). Jan 17 00:04:21.101747 sshd[5284]: Invalid user azure from 185.246.128.171 port 58597 Jan 17 00:04:21.305309 sshd[5284]: Disconnecting invalid user azure 185.246.128.171 port 58597: Change of username or service not allowed: (azure,ssh-connection) -> (onlime_r,ssh-connection) [preauth] Jan 17 00:04:21.308771 systemd[1]: sshd@10-167.235.246.183:22-185.246.128.171:58597.service: Deactivated successfully. Jan 17 00:04:21.787940 systemd[1]: Started sshd@11-167.235.246.183:22-185.246.128.171:43462.service - OpenSSH per-connection server daemon (185.246.128.171:43462). Jan 17 00:04:22.885972 sshd[5291]: Invalid user onlime_r from 185.246.128.171 port 43462 Jan 17 00:04:22.916409 sshd[5291]: Disconnecting invalid user onlime_r 185.246.128.171 port 43462: Change of username or service not allowed: (onlime_r,ssh-connection) -> (instrument,ssh-connection) [preauth] Jan 17 00:04:22.918961 systemd[1]: sshd@11-167.235.246.183:22-185.246.128.171:43462.service: Deactivated successfully. Jan 17 00:04:23.584528 systemd[1]: Started sshd@12-167.235.246.183:22-185.246.128.171:53845.service - OpenSSH per-connection server daemon (185.246.128.171:53845). Jan 17 00:04:24.007129 kubelet[2574]: E0117 00:04:24.007072 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:04:24.009228 kubelet[2574]: E0117 00:04:24.009179 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:04:26.007899 kubelet[2574]: E0117 00:04:26.007845 2574 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:04:27.063121 sshd[5296]: Invalid user instrument from 185.246.128.171 port 53845 Jan 17 00:04:27.365576 sshd[5296]: Disconnecting invalid user instrument 185.246.128.171 port 53845: Change of username or service not allowed: (instrument,ssh-connection) -> (user,ssh-connection) [preauth] Jan 17 00:04:27.368208 systemd[1]: sshd@12-167.235.246.183:22-185.246.128.171:53845.service: Deactivated successfully. Jan 17 00:04:27.990974 systemd[1]: Started sshd@13-167.235.246.183:22-185.246.128.171:54196.service - OpenSSH per-connection server daemon (185.246.128.171:54196). Jan 17 00:04:28.401432 sshd[5301]: Invalid user user from 185.246.128.171 port 54196 Jan 17 00:04:30.007091 kubelet[2574]: E0117 00:04:30.006987 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:04:30.071023 sshd[5301]: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 54196 ssh2 [preauth] Jan 17 00:04:30.071023 sshd[5301]: Disconnecting invalid user user 185.246.128.171 port 54196: Too many authentication failures [preauth] Jan 17 00:04:30.072710 systemd[1]: sshd@13-167.235.246.183:22-185.246.128.171:54196.service: Deactivated successfully. Jan 17 00:04:31.007311 kubelet[2574]: E0117 00:04:31.006689 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:04:31.308680 systemd[1]: Started sshd@14-167.235.246.183:22-185.246.128.171:55252.service - OpenSSH per-connection server daemon (185.246.128.171:55252). 
Jan 17 00:04:32.005969 kubelet[2574]: E0117 00:04:32.005456 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:04:32.216727 sshd[5328]: Invalid user user from 185.246.128.171 port 55252 Jan 17 00:04:33.571074 sshd[5328]: Disconnecting invalid user user 185.246.128.171 port 55252: Change of username or service not allowed: (user,ssh-connection) -> (john,ssh-connection) [preauth] Jan 17 00:04:33.573895 systemd[1]: sshd@14-167.235.246.183:22-185.246.128.171:55252.service: Deactivated successfully. Jan 17 00:04:34.088906 systemd[1]: Started sshd@15-167.235.246.183:22-185.246.128.171:42643.service - OpenSSH per-connection server daemon (185.246.128.171:42643). Jan 17 00:04:34.925671 sshd[5333]: Invalid user john from 185.246.128.171 port 42643 Jan 17 00:04:35.222608 sshd[5333]: Disconnecting invalid user john 185.246.128.171 port 42643: Change of username or service not allowed: (john,ssh-connection) -> (openmediavault,ssh-connection) [preauth] Jan 17 00:04:35.223210 systemd[1]: sshd@15-167.235.246.183:22-185.246.128.171:42643.service: Deactivated successfully. Jan 17 00:04:35.736942 systemd[1]: Started sshd@16-167.235.246.183:22-185.246.128.171:2367.service - OpenSSH per-connection server daemon (185.246.128.171:2367). Jan 17 00:04:36.961649 sshd[5338]: Invalid user openmediavault from 185.246.128.171 port 2367 Jan 17 00:04:37.093780 sshd[5338]: Disconnecting invalid user openmediavault 185.246.128.171 port 2367: Change of username or service not allowed: (openmediavault,ssh-connection) -> (demo,ssh-connection) [preauth] Jan 17 00:04:37.094331 systemd[1]: sshd@16-167.235.246.183:22-185.246.128.171:2367.service: Deactivated successfully. Jan 17 00:04:37.586246 systemd[1]: Started sshd@17-167.235.246.183:22-185.246.128.171:40389.service - OpenSSH per-connection server daemon (185.246.128.171:40389). 
Jan 17 00:04:39.006530 kubelet[2574]: E0117 00:04:39.006462 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:04:39.008569 kubelet[2574]: E0117 00:04:39.006965 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:04:39.008569 kubelet[2574]: E0117 00:04:39.008234 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:04:39.145278 sshd[5343]: Invalid user demo from 185.246.128.171 port 40389 Jan 17 00:04:41.566667 sshd[5343]: Disconnecting invalid user demo 185.246.128.171 port 40389: Change of username or service not allowed: (demo,ssh-connection) -> (ftpuser,ssh-connection) [preauth] Jan 17 00:04:41.569708 systemd[1]: sshd@17-167.235.246.183:22-185.246.128.171:40389.service: Deactivated successfully. Jan 17 00:04:42.039742 systemd[1]: Started sshd@18-167.235.246.183:22-185.246.128.171:62708.service - OpenSSH per-connection server daemon (185.246.128.171:62708). 
Jan 17 00:04:43.011923 kubelet[2574]: E0117 00:04:43.011499 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:04:43.616410 sshd[5348]: Invalid user ftpuser from 185.246.128.171 port 62708 Jan 17 00:04:44.638800 sshd[5348]: Disconnecting invalid user ftpuser 185.246.128.171 port 62708: Change of username or service not allowed: (ftpuser,ssh-connection) -> (user01,ssh-connection) [preauth] Jan 17 00:04:44.643178 systemd[1]: sshd@18-167.235.246.183:22-185.246.128.171:62708.service: Deactivated successfully. Jan 17 00:04:45.009745 kubelet[2574]: E0117 00:04:45.009562 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:04:46.005620 kubelet[2574]: E0117 00:04:46.005545 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:04:46.321781 systemd[1]: Started sshd@19-167.235.246.183:22-185.246.128.171:41834.service - OpenSSH per-connection server daemon (185.246.128.171:41834). Jan 17 00:04:48.575580 sshd[5355]: Invalid user user01 from 185.246.128.171 port 41834 Jan 17 00:04:49.241085 sshd[5355]: Disconnecting invalid user user01 185.246.128.171 port 41834: Change of username or service not allowed: (user01,ssh-connection) -> (syncthing,ssh-connection) [preauth] Jan 17 00:04:49.244098 systemd[1]: sshd@19-167.235.246.183:22-185.246.128.171:41834.service: Deactivated successfully. Jan 17 00:04:49.718835 systemd[1]: Started sshd@20-167.235.246.183:22-185.246.128.171:60326.service - OpenSSH per-connection server daemon (185.246.128.171:60326). 
Jan 17 00:04:50.006245 kubelet[2574]: E0117 00:04:50.006114 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:04:50.401292 sshd[5366]: Invalid user syncthing from 185.246.128.171 port 60326 Jan 17 00:04:50.561488 sshd[5366]: Disconnecting invalid user syncthing 185.246.128.171 port 60326: Change of username or service not allowed: (syncthing,ssh-connection) -> (mongod,ssh-connection) [preauth] Jan 17 00:04:50.565032 systemd[1]: sshd@20-167.235.246.183:22-185.246.128.171:60326.service: Deactivated successfully. Jan 17 00:04:50.939923 systemd[1]: Started sshd@21-167.235.246.183:22-185.246.128.171:1376.service - OpenSSH per-connection server daemon (185.246.128.171:1376). Jan 17 00:04:52.005700 kubelet[2574]: E0117 00:04:52.005628 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:04:52.603306 sshd[5373]: Invalid user mongod from 185.246.128.171 port 1376 Jan 17 00:04:52.667690 sshd[5373]: Disconnecting invalid user mongod 185.246.128.171 port 1376: Change of username or service not allowed: (mongod,ssh-connection) -> (publicuser,ssh-connection) [preauth] Jan 17 00:04:52.669153 systemd[1]: sshd@21-167.235.246.183:22-185.246.128.171:1376.service: Deactivated successfully. Jan 17 00:04:53.849881 systemd[1]: Started sshd@22-167.235.246.183:22-185.246.128.171:46437.service - OpenSSH per-connection server daemon (185.246.128.171:46437). 
Jan 17 00:04:54.006424 containerd[1486]: time="2026-01-17T00:04:54.006288760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:04:54.351444 containerd[1486]: time="2026-01-17T00:04:54.351116991Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:54.352439 containerd[1486]: time="2026-01-17T00:04:54.352385311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:04:54.353500 kubelet[2574]: E0117 00:04:54.352653 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:54.353500 kubelet[2574]: E0117 00:04:54.352697 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:54.353500 kubelet[2574]: E0117 00:04:54.352776 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:54.353500 kubelet[2574]: E0117 00:04:54.353036 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:04:54.354392 containerd[1486]: time="2026-01-17T00:04:54.354337791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:55.006035 kubelet[2574]: E0117 00:04:55.005763 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:04:57.011362 containerd[1486]: time="2026-01-17T00:04:57.011315244Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:04:57.232746 sshd[5378]: Invalid user publicuser from 185.246.128.171 port 46437 Jan 17 00:04:57.346222 containerd[1486]: time="2026-01-17T00:04:57.346119876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:57.348182 containerd[1486]: time="2026-01-17T00:04:57.348088796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:04:57.348322 containerd[1486]: time="2026-01-17T00:04:57.348276756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:04:57.349007 kubelet[2574]: E0117 00:04:57.348625 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:57.349007 kubelet[2574]: E0117 00:04:57.348682 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:57.349007 kubelet[2574]: E0117 00:04:57.348769 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:57.350671 containerd[1486]: time="2026-01-17T00:04:57.350394916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:04:57.654735 sshd[5378]: Disconnecting invalid user publicuser 185.246.128.171 port 46437: Change of username or service not allowed: (publicuser,ssh-connection) -> (aovalle,ssh-connection) [preauth] Jan 17 00:04:57.655581 systemd[1]: sshd@22-167.235.246.183:22-185.246.128.171:46437.service: Deactivated successfully. 
Jan 17 00:04:57.690864 containerd[1486]: time="2026-01-17T00:04:57.690634627Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:57.692605 containerd[1486]: time="2026-01-17T00:04:57.692075467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:04:57.692605 containerd[1486]: time="2026-01-17T00:04:57.692198667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:04:57.693878 kubelet[2574]: E0117 00:04:57.693595 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:57.693878 kubelet[2574]: E0117 00:04:57.693654 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:57.693878 kubelet[2574]: E0117 00:04:57.693746 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:57.694096 kubelet[2574]: E0117 00:04:57.693791 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:04:58.613547 systemd[1]: Started sshd@23-167.235.246.183:22-185.246.128.171:56954.service - OpenSSH per-connection server daemon (185.246.128.171:56954). 
Jan 17 00:04:59.760407 sshd[5405]: Invalid user aovalle from 185.246.128.171 port 56954 Jan 17 00:05:00.649134 sshd[5405]: Disconnecting invalid user aovalle 185.246.128.171 port 56954: Change of username or service not allowed: (aovalle,ssh-connection) -> (teste,ssh-connection) [preauth] Jan 17 00:05:00.652356 systemd[1]: sshd@23-167.235.246.183:22-185.246.128.171:56954.service: Deactivated successfully. Jan 17 00:05:01.008652 containerd[1486]: time="2026-01-17T00:05:01.008111749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:05:01.372597 containerd[1486]: time="2026-01-17T00:05:01.372391580Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:05:01.374208 containerd[1486]: time="2026-01-17T00:05:01.374050540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:05:01.374208 containerd[1486]: time="2026-01-17T00:05:01.374172260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:05:01.375343 kubelet[2574]: E0117 00:05:01.374528 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:05:01.375343 kubelet[2574]: E0117 00:05:01.374581 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:05:01.375343 kubelet[2574]: E0117 00:05:01.374663 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:05:01.375343 kubelet[2574]: E0117 00:05:01.374695 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:05:01.506593 systemd[1]: Started sshd@24-167.235.246.183:22-185.246.128.171:52795.service - OpenSSH per-connection server daemon (185.246.128.171:52795). 
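The "trying next host - response was http.StatusNotFound" records come from containerd's resolver, which walks the ordered list of hosts configured for a registry domain and surfaces an error only after every host has been tried; with just the default host for ghcr.io there is nothing to fall back to, so the 404 becomes the final NotFound that kubelet reports. A rough sketch of that fallback loop follows; the NotFound/fetch names are hypothetical stand-ins for containerd's internals, not its actual API.

```python
# Illustrative sketch (not containerd's code) of the host-fallback loop behind
# the "trying next host" messages: each configured host for the registry
# domain is tried in order, and only the last failure becomes the pull error.
from typing import Callable, Optional, Sequence

class NotFound(Exception):
    pass

def resolve(ref: str, hosts: Sequence[str],
            fetch: Callable[[str, str], bytes]) -> bytes:
    last_err: Optional[Exception] = None
    for host in hosts:
        try:
            # e.g. HEAD /v2/<name>/manifests/<tag> against this host
            return fetch(host, ref)
        except NotFound as err:
            print(f"trying next host - response was http.StatusNotFound host={host}")
            last_err = err
    # No host had the reference: fail the pull, as in the records above.
    raise NotFound(f"{ref}: not found") from last_err
```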
Jan 17 00:05:02.521879 sshd[5424]: Invalid user teste from 185.246.128.171 port 52795 Jan 17 00:05:02.854600 sshd[5424]: Disconnecting invalid user teste 185.246.128.171 port 52795: Change of username or service not allowed: (teste,ssh-connection) -> (ftp1,ssh-connection) [preauth] Jan 17 00:05:02.857375 systemd[1]: sshd@24-167.235.246.183:22-185.246.128.171:52795.service: Deactivated successfully. Jan 17 00:05:03.795307 systemd[1]: Started sshd@25-167.235.246.183:22-185.246.128.171:56654.service - OpenSSH per-connection server daemon (185.246.128.171:56654). Jan 17 00:05:04.004849 containerd[1486]: time="2026-01-17T00:05:04.004761521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:05:04.347677 containerd[1486]: time="2026-01-17T00:05:04.347047113Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:05:04.348870 containerd[1486]: time="2026-01-17T00:05:04.348778353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:05:04.348981 containerd[1486]: time="2026-01-17T00:05:04.348951033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:05:04.349523 kubelet[2574]: E0117 00:05:04.349237 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:05:04.349523 kubelet[2574]: E0117 00:05:04.349288 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:05:04.349523 kubelet[2574]: E0117 00:05:04.349369 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:05:04.351326 containerd[1486]: time="2026-01-17T00:05:04.350550913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:05:04.735212 containerd[1486]: time="2026-01-17T00:05:04.734700065Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:05:04.737040 containerd[1486]: time="2026-01-17T00:05:04.736869385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:05:04.737040 containerd[1486]: 
time="2026-01-17T00:05:04.736940585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:05:04.737320 kubelet[2574]: E0117 00:05:04.737200 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:05:04.737320 kubelet[2574]: E0117 00:05:04.737279 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:05:04.737999 kubelet[2574]: E0117 00:05:04.737372 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9c4977545-g698v_calico-system(1edd65d8-b5e4-447f-a4cd-2de7f77232a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:05:04.737999 kubelet[2574]: E0117 00:05:04.737466 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:05:05.248982 sshd[5437]: Invalid user ftp1 from 185.246.128.171 port 56654 Jan 17 00:05:05.750603 sshd[5437]: Disconnecting invalid user ftp1 185.246.128.171 port 56654: Change of username or service not allowed: (ftp1,ssh-connection) -> (elasticsearch,ssh-connection) [preauth] Jan 17 00:05:05.753936 systemd[1]: sshd@25-167.235.246.183:22-185.246.128.171:56654.service: Deactivated successfully. 
Jan 17 00:05:06.006464 containerd[1486]: time="2026-01-17T00:05:06.006104197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:05:06.349640 containerd[1486]: time="2026-01-17T00:05:06.349410550Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:05:06.351287 containerd[1486]: time="2026-01-17T00:05:06.351104710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:05:06.351287 containerd[1486]: time="2026-01-17T00:05:06.351172110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:05:06.351429 kubelet[2574]: E0117 00:05:06.351385 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:05:06.351741 kubelet[2574]: E0117 00:05:06.351437 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:05:06.351741 kubelet[2574]: E0117 00:05:06.351526 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-6ghq5_calico-apiserver(3a9d9fee-4b98-43fe-862d-a1e26e86f2ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:05:06.351741 kubelet[2574]: E0117 00:05:06.351557 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:05:06.554980 systemd[1]: Started sshd@26-167.235.246.183:22-185.246.128.171:45588.service - OpenSSH per-connection server daemon (185.246.128.171:45588). Jan 17 00:05:07.712600 sshd[5442]: Invalid user elasticsearch from 185.246.128.171 port 45588 Jan 17 00:05:07.800545 sshd[5442]: Disconnecting invalid user elasticsearch 185.246.128.171 port 45588: Change of username or service not allowed: (elasticsearch,ssh-connection) -> (sftp,ssh-connection) [preauth] Jan 17 00:05:07.802601 systemd[1]: sshd@26-167.235.246.183:22-185.246.128.171:45588.service: Deactivated successfully. Jan 17 00:05:07.982880 systemd[1]: Started sshd@27-167.235.246.183:22-185.246.128.171:48118.service - OpenSSH per-connection server daemon (185.246.128.171:48118). 
Jan 17 00:05:08.819845 sshd[5447]: Invalid user sftp from 185.246.128.171 port 48118 Jan 17 00:05:09.008700 kubelet[2574]: E0117 00:05:09.008652 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:05:09.010872 containerd[1486]: time="2026-01-17T00:05:09.010206894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:05:09.262587 sshd[5447]: Disconnecting invalid user sftp 185.246.128.171 port 48118: Change of username or service not allowed: (sftp,ssh-connection) -> (tunnel,ssh-connection) [preauth] Jan 17 00:05:09.268337 systemd[1]: sshd@27-167.235.246.183:22-185.246.128.171:48118.service: Deactivated successfully. Jan 17 00:05:09.350238 containerd[1486]: time="2026-01-17T00:05:09.350184527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:05:09.351503 containerd[1486]: time="2026-01-17T00:05:09.351388407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:05:09.351503 containerd[1486]: time="2026-01-17T00:05:09.351471207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:05:09.352842 kubelet[2574]: E0117 00:05:09.352766 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:05:09.352842 kubelet[2574]: E0117 00:05:09.352847 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:05:09.353025 kubelet[2574]: E0117 00:05:09.352936 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-txw7d_calico-system(362a3452-c30b-406b-9bbb-9543b4b09e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:05:09.353025 kubelet[2574]: E0117 00:05:09.352975 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:05:10.988797 systemd[1]: Started sshd@28-167.235.246.183:22-185.246.128.171:23102.service - OpenSSH per-connection server daemon (185.246.128.171:23102). Jan 17 00:05:11.588763 sshd[5452]: Invalid user tunnel from 185.246.128.171 port 23102 Jan 17 00:05:11.922580 sshd[5452]: Disconnecting invalid user tunnel 185.246.128.171 port 23102: Change of username or service not allowed: (tunnel,ssh-connection) -> (tester,ssh-connection) [preauth] Jan 17 00:05:11.928236 systemd[1]: sshd@28-167.235.246.183:22-185.246.128.171:23102.service: Deactivated successfully. Jan 17 00:05:12.327434 systemd[1]: Started sshd@29-167.235.246.183:22-185.246.128.171:44947.service - OpenSSH per-connection server daemon (185.246.128.171:44947). Jan 17 00:05:13.010449 kubelet[2574]: E0117 00:05:13.007270 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:05:13.010449 kubelet[2574]: E0117 00:05:13.009406 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:05:13.209656 sshd[5457]: Invalid user tester from 185.246.128.171 port 44947 Jan 17 00:05:13.450639 sshd[5457]: Disconnecting invalid user tester 185.246.128.171 port 44947: Change of username or service not allowed: (tester,ssh-connection) -> (log,ssh-connection) [preauth] Jan 17 00:05:13.454420 systemd[1]: sshd@29-167.235.246.183:22-185.246.128.171:44947.service: Deactivated successfully. Jan 17 00:05:14.172868 systemd[1]: Started sshd@30-167.235.246.183:22-185.246.128.171:46581.service - OpenSSH per-connection server daemon (185.246.128.171:46581). 
Jan 17 00:05:14.754100 sshd[5462]: Invalid user log from 185.246.128.171 port 46581 Jan 17 00:05:15.001531 sshd[5462]: Disconnecting invalid user log 185.246.128.171 port 46581: Change of username or service not allowed: (log,ssh-connection) -> (nc,ssh-connection) [preauth] Jan 17 00:05:15.003848 systemd[1]: sshd@30-167.235.246.183:22-185.246.128.171:46581.service: Deactivated successfully. Jan 17 00:05:15.172308 systemd[1]: Started sshd@31-167.235.246.183:22-4.153.228.146:46598.service - OpenSSH per-connection server daemon (4.153.228.146:46598). Jan 17 00:05:15.769986 sshd[5470]: Accepted publickey for core from 4.153.228.146 port 46598 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:15.772607 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:15.779603 systemd-logind[1463]: New session 8 of user core. Jan 17 00:05:15.787239 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:05:15.873340 systemd[1]: Started sshd@32-167.235.246.183:22-185.246.128.171:13066.service - OpenSSH per-connection server daemon (185.246.128.171:13066). Jan 17 00:05:16.006854 kubelet[2574]: E0117 00:05:16.006758 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:05:16.324169 sshd[5470]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:16.331040 systemd[1]: sshd@31-167.235.246.183:22-4.153.228.146:46598.service: Deactivated successfully. Jan 17 00:05:16.334325 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:05:16.337187 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:05:16.339433 systemd-logind[1463]: Removed session 8. Jan 17 00:05:16.987322 sshd[5474]: Invalid user nc from 185.246.128.171 port 13066 Jan 17 00:05:17.475911 sshd[5474]: Disconnecting invalid user nc 185.246.128.171 port 13066: Change of username or service not allowed: (nc,ssh-connection) -> (monitor,ssh-connection) [preauth] Jan 17 00:05:17.478616 systemd[1]: sshd@32-167.235.246.183:22-185.246.128.171:13066.service: Deactivated successfully. 
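In contrast to the probes, the sessions from 4.153.228.146 authenticate with a public key whose fingerprint sshd prints at every "Accepted publickey" record (SHA256:+BFNXgSf...). An OpenSSH SHA256 fingerprint is the unpadded base64 of the SHA-256 digest of the decoded key blob, so it can be recomputed from an authorized_keys entry to confirm which key these sessions use; the example input format below is an assumption, not data taken from this host.

```python
# Sketch: recompute an OpenSSH SHA256 key fingerprint (the "SHA256:+BFN..."
# form in the "Accepted publickey" records) from an authorized_keys line.
import base64
import hashlib

def fingerprint(authorized_keys_line: str) -> str:
    # Expected shape: "<type> <base64-blob> [comment]", e.g. "ssh-rsa AAAA... core"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```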
Jan 17 00:05:19.005970 kubelet[2574]: E0117 00:05:19.005907 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:05:19.317606 systemd[1]: Started sshd@33-167.235.246.183:22-185.246.128.171:56237.service - OpenSSH per-connection server daemon (185.246.128.171:56237). Jan 17 00:05:20.834732 sshd[5489]: Invalid user monitor from 185.246.128.171 port 56237 Jan 17 00:05:20.962254 sshd[5489]: Disconnecting invalid user monitor 185.246.128.171 port 56237: Change of username or service not allowed: (monitor,ssh-connection) -> (user1,ssh-connection) [preauth] Jan 17 00:05:20.964002 systemd[1]: sshd@33-167.235.246.183:22-185.246.128.171:56237.service: Deactivated successfully. Jan 17 00:05:21.011297 kubelet[2574]: E0117 00:05:21.010638 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:05:21.443986 systemd[1]: Started sshd@34-167.235.246.183:22-4.153.228.146:46606.service - OpenSSH per-connection server daemon (4.153.228.146:46606). Jan 17 00:05:21.535912 systemd[1]: Started sshd@35-167.235.246.183:22-185.246.128.171:43092.service - OpenSSH per-connection server daemon (185.246.128.171:43092). Jan 17 00:05:22.061332 sshd[5496]: Accepted publickey for core from 4.153.228.146 port 46606 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:22.063808 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:22.073132 systemd-logind[1463]: New session 9 of user core. Jan 17 00:05:22.081722 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:05:22.520643 sshd[5499]: Invalid user user1 from 185.246.128.171 port 43092 Jan 17 00:05:22.609614 sshd[5496]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:22.615633 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:05:22.616735 systemd[1]: sshd@34-167.235.246.183:22-4.153.228.146:46606.service: Deactivated successfully. Jan 17 00:05:22.619688 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:05:22.622839 systemd-logind[1463]: Removed session 9. Jan 17 00:05:23.173429 sshd[5499]: Disconnecting invalid user user1 185.246.128.171 port 43092: Change of username or service not allowed: (user1,ssh-connection) -> (Test,ssh-connection) [preauth] Jan 17 00:05:23.174569 systemd[1]: sshd@35-167.235.246.183:22-185.246.128.171:43092.service: Deactivated successfully. 
Jan 17 00:05:23.738927 systemd[1]: Started sshd@36-167.235.246.183:22-185.246.128.171:60023.service - OpenSSH per-connection server daemon (185.246.128.171:60023). Jan 17 00:05:24.005881 kubelet[2574]: E0117 00:05:24.005094 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:05:24.007692 kubelet[2574]: E0117 00:05:24.007554 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:05:24.779321 sshd[5516]: Invalid user Test from 185.246.128.171 port 60023 Jan 17 00:05:25.005272 kubelet[2574]: E0117 00:05:25.004658 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:05:26.770751 sshd[5516]: Disconnecting invalid user Test 185.246.128.171 port 60023: Change of username or service not allowed: (Test,ssh-connection) -> (helpdesk,ssh-connection) [preauth] Jan 17 00:05:26.775737 systemd[1]: sshd@36-167.235.246.183:22-185.246.128.171:60023.service: Deactivated successfully. Jan 17 00:05:27.732051 systemd[1]: Started sshd@37-167.235.246.183:22-4.153.228.146:45000.service - OpenSSH per-connection server daemon (4.153.228.146:45000). Jan 17 00:05:27.740839 systemd[1]: Started sshd@38-167.235.246.183:22-185.246.128.171:48018.service - OpenSSH per-connection server daemon (185.246.128.171:48018). Jan 17 00:05:28.322420 systemd[1]: run-containerd-runc-k8s.io-82ba6b896debd124731a3ce7e70b90d937b6b1b308e740825f7c957618dce863-runc.uldHvO.mount: Deactivated successfully. 
Jan 17 00:05:28.355096 sshd[5522]: Accepted publickey for core from 4.153.228.146 port 45000 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:28.357886 sshd[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:28.364907 systemd-logind[1463]: New session 10 of user core. Jan 17 00:05:28.369786 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:05:28.886321 sshd[5522]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:28.892221 systemd[1]: sshd@37-167.235.246.183:22-4.153.228.146:45000.service: Deactivated successfully. Jan 17 00:05:28.898754 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:05:28.900951 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:05:28.903049 systemd-logind[1463]: Removed session 10. Jan 17 00:05:29.014905 systemd[1]: Started sshd@39-167.235.246.183:22-4.153.228.146:45012.service - OpenSSH per-connection server daemon (4.153.228.146:45012). Jan 17 00:05:29.040488 sshd[5523]: Invalid user helpdesk from 185.246.128.171 port 48018 Jan 17 00:05:29.384914 sshd[5523]: Disconnecting invalid user helpdesk 185.246.128.171 port 48018: Change of username or service not allowed: (helpdesk,ssh-connection) -> (github,ssh-connection) [preauth] Jan 17 00:05:29.388054 systemd[1]: sshd@38-167.235.246.183:22-185.246.128.171:48018.service: Deactivated successfully. Jan 17 00:05:29.669679 sshd[5561]: Accepted publickey for core from 4.153.228.146 port 45012 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:29.670846 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:29.681835 systemd-logind[1463]: New session 11 of user core. Jan 17 00:05:29.687832 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:05:29.926948 systemd[1]: Started sshd@40-167.235.246.183:22-185.246.128.171:43431.service - OpenSSH per-connection server daemon (185.246.128.171:43431). Jan 17 00:05:30.285161 sshd[5561]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:30.291791 systemd[1]: sshd@39-167.235.246.183:22-4.153.228.146:45012.service: Deactivated successfully. Jan 17 00:05:30.295186 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:05:30.297057 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:05:30.298669 systemd-logind[1463]: Removed session 11. Jan 17 00:05:30.405013 systemd[1]: Started sshd@41-167.235.246.183:22-4.153.228.146:45028.service - OpenSSH per-connection server daemon (4.153.228.146:45028). 
Jan 17 00:05:31.010292 kubelet[2574]: E0117 00:05:31.010037 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:05:31.037548 sshd[5577]: Accepted publickey for core from 4.153.228.146 port 45028 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:31.039374 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:31.044449 systemd-logind[1463]: New session 12 of user core. Jan 17 00:05:31.052856 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:05:31.188834 sshd[5567]: Invalid user github from 185.246.128.171 port 43431 Jan 17 00:05:31.218854 systemd[1]: Started sshd@42-167.235.246.183:22-119.18.52.5:35094.service - OpenSSH per-connection server daemon (119.18.52.5:35094). Jan 17 00:05:31.344666 sshd[5567]: Disconnecting invalid user github 185.246.128.171 port 43431: Change of username or service not allowed: (github,ssh-connection) -> (tmax,ssh-connection) [preauth] Jan 17 00:05:31.347610 systemd[1]: sshd@40-167.235.246.183:22-185.246.128.171:43431.service: Deactivated successfully. Jan 17 00:05:31.554902 systemd[1]: Started sshd@43-167.235.246.183:22-185.156.73.233:62008.service - OpenSSH per-connection server daemon (185.156.73.233:62008). Jan 17 00:05:31.587452 sshd[5577]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:31.593446 systemd[1]: sshd@41-167.235.246.183:22-4.153.228.146:45028.service: Deactivated successfully. Jan 17 00:05:31.597975 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:05:31.600929 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:05:31.602404 systemd-logind[1463]: Removed session 12. Jan 17 00:05:31.683365 systemd[1]: Started sshd@44-167.235.246.183:22-185.246.128.171:43475.service - OpenSSH per-connection server daemon (185.246.128.171:43475). Jan 17 00:05:32.605144 sshd[5597]: Invalid user tmax from 185.246.128.171 port 43475 Jan 17 00:05:32.665564 sshd[5597]: Disconnecting invalid user tmax 185.246.128.171 port 43475: Change of username or service not allowed: (tmax,ssh-connection) -> (api,ssh-connection) [preauth] Jan 17 00:05:32.668277 systemd[1]: sshd@44-167.235.246.183:22-185.246.128.171:43475.service: Deactivated successfully. Jan 17 00:05:33.287903 systemd[1]: Started sshd@45-167.235.246.183:22-185.246.128.171:61730.service - OpenSSH per-connection server daemon (185.246.128.171:61730). 
Jan 17 00:05:33.494094 sshd[5581]: Received disconnect from 119.18.52.5 port 35094:11: Bye Bye [preauth] Jan 17 00:05:33.494550 sshd[5581]: Disconnected from authenticating user root 119.18.52.5 port 35094 [preauth] Jan 17 00:05:33.497635 systemd[1]: sshd@42-167.235.246.183:22-119.18.52.5:35094.service: Deactivated successfully. Jan 17 00:05:34.005407 kubelet[2574]: E0117 00:05:34.005359 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:05:34.629143 sshd[5602]: Invalid user api from 185.246.128.171 port 61730 Jan 17 00:05:34.682914 sshd[5602]: Disconnecting invalid user api 185.246.128.171 port 61730: Change of username or service not allowed: (api,ssh-connection) -> (theta,ssh-connection) [preauth] Jan 17 00:05:34.686642 systemd[1]: sshd@45-167.235.246.183:22-185.246.128.171:61730.service: Deactivated successfully. Jan 17 00:05:35.313938 systemd[1]: Started sshd@46-167.235.246.183:22-185.246.128.171:39769.service - OpenSSH per-connection server daemon (185.246.128.171:39769). Jan 17 00:05:35.825170 sshd[5614]: Invalid user theta from 185.246.128.171 port 39769 Jan 17 00:05:36.007336 kubelet[2574]: E0117 00:05:36.007250 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:05:36.009449 kubelet[2574]: E0117 00:05:36.009398 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:05:36.156703 sshd[5614]: Disconnecting invalid user theta 185.246.128.171 port 39769: Change of username or service not allowed: (theta,ssh-connection) -> (minima,ssh-connection) [preauth] Jan 17 00:05:36.160743 systemd[1]: 
sshd@46-167.235.246.183:22-185.246.128.171:39769.service: Deactivated successfully. Jan 17 00:05:36.268871 systemd[1]: Started sshd@47-167.235.246.183:22-185.246.128.171:5865.service - OpenSSH per-connection server daemon (185.246.128.171:5865). Jan 17 00:05:36.699467 systemd[1]: Started sshd@48-167.235.246.183:22-4.153.228.146:46708.service - OpenSSH per-connection server daemon (4.153.228.146:46708). Jan 17 00:05:36.988450 sshd[5593]: Connection closed by authenticating user root 185.156.73.233 port 62008 [preauth] Jan 17 00:05:36.991698 systemd[1]: sshd@43-167.235.246.183:22-185.156.73.233:62008.service: Deactivated successfully. Jan 17 00:05:37.007447 kubelet[2574]: E0117 00:05:37.007010 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd" Jan 17 00:05:37.090041 sshd[5619]: Invalid user minima from 185.246.128.171 port 5865 Jan 17 00:05:37.318986 sshd[5622]: Accepted publickey for core from 4.153.228.146 port 46708 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:37.320555 sshd[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:37.327410 systemd-logind[1463]: New session 13 of user core. Jan 17 00:05:37.334702 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:05:37.838109 sshd[5622]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:37.842682 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:05:37.843413 systemd[1]: sshd@48-167.235.246.183:22-4.153.228.146:46708.service: Deactivated successfully. Jan 17 00:05:37.849354 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:05:37.850923 systemd-logind[1463]: Removed session 13. Jan 17 00:05:38.004706 kubelet[2574]: E0117 00:05:38.004235 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9" Jan 17 00:05:38.309433 sshd[5619]: Disconnecting invalid user minima 185.246.128.171 port 5865: Change of username or service not allowed: (minima,ssh-connection) -> (orangepi,ssh-connection) [preauth] Jan 17 00:05:38.313275 systemd[1]: sshd@47-167.235.246.183:22-185.246.128.171:5865.service: Deactivated successfully. Jan 17 00:05:38.845853 systemd[1]: Started sshd@49-167.235.246.183:22-185.246.128.171:50785.service - OpenSSH per-connection server daemon (185.246.128.171:50785). 
Jan 17 00:05:40.042991 sshd[5639]: Invalid user orangepi from 185.246.128.171 port 50785 Jan 17 00:05:40.125092 sshd[5639]: Disconnecting invalid user orangepi 185.246.128.171 port 50785: Change of username or service not allowed: (orangepi,ssh-connection) -> (anonymous,ssh-connection) [preauth] Jan 17 00:05:40.129003 systemd[1]: sshd@49-167.235.246.183:22-185.246.128.171:50785.service: Deactivated successfully. Jan 17 00:05:42.236071 systemd[1]: Started sshd@50-167.235.246.183:22-185.246.128.171:52899.service - OpenSSH per-connection server daemon (185.246.128.171:52899). Jan 17 00:05:42.956727 systemd[1]: Started sshd@51-167.235.246.183:22-4.153.228.146:46710.service - OpenSSH per-connection server daemon (4.153.228.146:46710). Jan 17 00:05:43.009252 kubelet[2574]: E0117 00:05:43.009182 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4" Jan 17 00:05:43.598727 sshd[5646]: Accepted publickey for core from 4.153.228.146 port 46710 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:43.600635 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:43.608291 systemd-logind[1463]: New session 14 of user core. Jan 17 00:05:43.614987 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:05:43.958366 sshd[5644]: Invalid user anonymous from 185.246.128.171 port 52899 Jan 17 00:05:44.137731 sshd[5646]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:44.141478 systemd[1]: sshd@51-167.235.246.183:22-4.153.228.146:46710.service: Deactivated successfully. Jan 17 00:05:44.147936 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:05:44.150691 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:05:44.152604 systemd-logind[1463]: Removed session 14. Jan 17 00:05:45.194282 sshd[5644]: maximum authentication attempts exceeded for invalid user anonymous from 185.246.128.171 port 52899 ssh2 [preauth] Jan 17 00:05:45.194282 sshd[5644]: Disconnecting invalid user anonymous 185.246.128.171 port 52899: Too many authentication failures [preauth] Jan 17 00:05:45.196924 systemd[1]: sshd@50-167.235.246.183:22-185.246.128.171:52899.service: Deactivated successfully. Jan 17 00:05:45.400059 systemd[1]: Started sshd@52-167.235.246.183:22-185.246.128.171:54336.service - OpenSSH per-connection server daemon (185.246.128.171:54336). 
Jan 17 00:05:46.336687 sshd[5664]: Invalid user anonymous from 185.246.128.171 port 54336 Jan 17 00:05:46.831147 sshd[5664]: Disconnecting invalid user anonymous 185.246.128.171 port 54336: Change of username or service not allowed: (anonymous,ssh-connection) -> (administrator,ssh-connection) [preauth] Jan 17 00:05:46.832393 systemd[1]: sshd@52-167.235.246.183:22-185.246.128.171:54336.service: Deactivated successfully. Jan 17 00:05:46.908985 systemd[1]: Started sshd@53-167.235.246.183:22-185.246.128.171:22632.service - OpenSSH per-connection server daemon (185.246.128.171:22632). Jan 17 00:05:48.004435 kubelet[2574]: E0117 00:05:48.004357 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90" Jan 17 00:05:48.431236 sshd[5669]: Invalid user administrator from 185.246.128.171 port 22632 Jan 17 00:05:48.479518 sshd[5669]: Disconnecting invalid user administrator 185.246.128.171 port 22632: Change of username or service not allowed: (administrator,ssh-connection) -> (user3,ssh-connection) [preauth] Jan 17 00:05:48.482699 systemd[1]: sshd@53-167.235.246.183:22-185.246.128.171:22632.service: Deactivated successfully. Jan 17 00:05:48.630007 systemd[1]: Started sshd@54-167.235.246.183:22-185.246.128.171:40784.service - OpenSSH per-connection server daemon (185.246.128.171:40784). Jan 17 00:05:49.005874 kubelet[2574]: E0117 00:05:49.005387 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee" Jan 17 00:05:49.259478 systemd[1]: Started sshd@55-167.235.246.183:22-4.153.228.146:50146.service - OpenSSH per-connection server daemon (4.153.228.146:50146). Jan 17 00:05:49.903879 sshd[5677]: Accepted publickey for core from 4.153.228.146 port 50146 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk Jan 17 00:05:49.907675 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:49.914359 systemd-logind[1463]: New session 15 of user core. Jan 17 00:05:49.918724 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:05:50.006560 kubelet[2574]: E0117 00:05:50.006187 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac" Jan 17 00:05:50.019442 sshd[5674]: Invalid user user3 from 185.246.128.171 port 40784 Jan 17 00:05:50.157194 sshd[5674]: Disconnecting invalid user user3 185.246.128.171 port 40784: Change of username or service not allowed: (user3,ssh-connection) -> (devops,ssh-connection) [preauth] Jan 17 00:05:50.161360 systemd[1]: sshd@54-167.235.246.183:22-185.246.128.171:40784.service: Deactivated successfully. Jan 17 00:05:50.443784 sshd[5677]: pam_unix(sshd:session): session closed for user core Jan 17 00:05:50.449774 systemd[1]: sshd@55-167.235.246.183:22-4.153.228.146:50146.service: Deactivated successfully. Jan 17 00:05:50.454685 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:05:50.459755 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:05:50.461555 systemd-logind[1463]: Removed session 15. Jan 17 00:05:50.553396 systemd[1]: Started sshd@56-167.235.246.183:22-4.153.228.146:50158.service - OpenSSH per-connection server daemon (4.153.228.146:50158). Jan 17 00:05:50.996848 systemd[1]: Started sshd@57-167.235.246.183:22-185.246.128.171:32843.service - OpenSSH per-connection server daemon (185.246.128.171:32843). 
Jan 17 00:05:51.005320 kubelet[2574]: E0117 00:05:51.005264 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9"
Jan 17 00:05:51.008159 kubelet[2574]: E0117 00:05:51.008082 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd"
Jan 17 00:05:51.179883 sshd[5691]: Accepted publickey for core from 4.153.228.146 port 50158 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:51.182048 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:51.188587 systemd-logind[1463]: New session 16 of user core.
Jan 17 00:05:51.194800 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:05:51.868402 sshd[5691]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:51.872865 systemd[1]: sshd@56-167.235.246.183:22-4.153.228.146:50158.service: Deactivated successfully.
Jan 17 00:05:51.872902 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:05:51.877207 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:05:51.879981 systemd-logind[1463]: Removed session 16.
Jan 17 00:05:51.981367 systemd[1]: Started sshd@58-167.235.246.183:22-4.153.228.146:50160.service - OpenSSH per-connection server daemon (4.153.228.146:50160).
Jan 17 00:05:52.208615 sshd[5696]: Invalid user devops from 185.246.128.171 port 32843
Jan 17 00:05:52.583359 sshd[5708]: Accepted publickey for core from 4.153.228.146 port 50160 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:52.586051 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:52.594715 systemd-logind[1463]: New session 17 of user core.
Jan 17 00:05:52.597076 sshd[5696]: Disconnecting invalid user devops 185.246.128.171 port 32843: Change of username or service not allowed: (devops,ssh-connection) -> (,ssh-connection) [preauth]
Jan 17 00:05:52.599703 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:05:52.605140 systemd[1]: sshd@57-167.235.246.183:22-185.246.128.171:32843.service: Deactivated successfully.
Jan 17 00:05:53.346195 systemd[1]: Started sshd@59-167.235.246.183:22-185.246.128.171:63940.service - OpenSSH per-connection server daemon (185.246.128.171:63940).
Jan 17 00:05:53.860794 sshd[5708]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:53.868607 systemd[1]: sshd@58-167.235.246.183:22-4.153.228.146:50160.service: Deactivated successfully.
Jan 17 00:05:53.873664 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:05:53.876654 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:05:53.879637 systemd-logind[1463]: Removed session 17.
Jan 17 00:05:53.973875 systemd[1]: Started sshd@60-167.235.246.183:22-4.153.228.146:50164.service - OpenSSH per-connection server daemon (4.153.228.146:50164).
Jan 17 00:05:54.429270 sshd[5721]: Invalid user from 185.246.128.171 port 63940
Jan 17 00:05:54.599088 sshd[5732]: Accepted publickey for core from 4.153.228.146 port 50164 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:54.603430 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:54.611452 systemd-logind[1463]: New session 18 of user core.
Jan 17 00:05:54.617713 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:05:55.268687 sshd[5732]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:55.273925 systemd[1]: sshd@60-167.235.246.183:22-4.153.228.146:50164.service: Deactivated successfully.
Jan 17 00:05:55.278174 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:05:55.279758 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:05:55.280814 systemd-logind[1463]: Removed session 18.
Jan 17 00:05:55.386178 systemd[1]: Started sshd@61-167.235.246.183:22-4.153.228.146:43672.service - OpenSSH per-connection server daemon (4.153.228.146:43672).
Jan 17 00:05:56.013896 sshd[5745]: Accepted publickey for core from 4.153.228.146 port 43672 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:05:56.016620 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:05:56.026302 systemd-logind[1463]: New session 19 of user core.
Jan 17 00:05:56.031780 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:05:56.547671 sshd[5745]: pam_unix(sshd:session): session closed for user core
Jan 17 00:05:56.553205 systemd[1]: sshd@61-167.235.246.183:22-4.153.228.146:43672.service: Deactivated successfully.
Jan 17 00:05:56.557878 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:05:56.559885 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:05:56.561773 systemd-logind[1463]: Removed session 19.
Jan 17 00:05:56.944023 sshd[5721]: Disconnecting invalid user 185.246.128.171 port 63940: Change of username or service not allowed: (,ssh-connection) -> (note,ssh-connection) [preauth]
Jan 17 00:05:56.946436 systemd[1]: sshd@59-167.235.246.183:22-185.246.128.171:63940.service: Deactivated successfully.
Jan 17 00:05:57.008885 kubelet[2574]: E0117 00:05:57.008432 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4"
Jan 17 00:05:57.976986 systemd[1]: Started sshd@62-167.235.246.183:22-185.246.128.171:39519.service - OpenSSH per-connection server daemon (185.246.128.171:39519).
Jan 17 00:05:58.931480 sshd[5760]: Invalid user note from 185.246.128.171 port 39519
Jan 17 00:05:59.336376 sshd[5760]: Disconnecting invalid user note 185.246.128.171 port 39519: Change of username or service not allowed: (note,ssh-connection) -> (dspace,ssh-connection) [preauth]
Jan 17 00:05:59.340582 systemd[1]: sshd@62-167.235.246.183:22-185.246.128.171:39519.service: Deactivated successfully.
Jan 17 00:06:00.105846 systemd[1]: Started sshd@63-167.235.246.183:22-185.246.128.171:20420.service - OpenSSH per-connection server daemon (185.246.128.171:20420).
Jan 17 00:06:01.008172 kubelet[2574]: E0117 00:06:01.008111 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90"
Jan 17 00:06:01.009331 kubelet[2574]: E0117 00:06:01.009168 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee"
Jan 17 00:06:01.619780 sshd[5790]: Invalid user dspace from 185.246.128.171 port 20420
Jan 17 00:06:01.655319 systemd[1]: Started sshd@64-167.235.246.183:22-4.153.228.146:43678.service - OpenSSH per-connection server daemon (4.153.228.146:43678).
Jan 17 00:06:01.818272 sshd[5790]: Disconnecting invalid user dspace 185.246.128.171 port 20420: Change of username or service not allowed: (dspace,ssh-connection) -> (fa,ssh-connection) [preauth]
Jan 17 00:06:01.822216 systemd[1]: sshd@63-167.235.246.183:22-185.246.128.171:20420.service: Deactivated successfully.
Jan 17 00:06:02.006496 kubelet[2574]: E0117 00:06:02.006361 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac"
Jan 17 00:06:02.007565 kubelet[2574]: E0117 00:06:02.007526 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd"
Jan 17 00:06:02.280555 sshd[5793]: Accepted publickey for core from 4.153.228.146 port 43678 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:06:02.283127 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:06:02.295822 systemd-logind[1463]: New session 20 of user core.
Jan 17 00:06:02.302772 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:06:02.760624 systemd[1]: Started sshd@65-167.235.246.183:22-185.246.128.171:3203.service - OpenSSH per-connection server daemon (185.246.128.171:3203).
Jan 17 00:06:02.820936 sshd[5793]: pam_unix(sshd:session): session closed for user core
Jan 17 00:06:02.826819 systemd[1]: sshd@64-167.235.246.183:22-4.153.228.146:43678.service: Deactivated successfully.
Jan 17 00:06:02.831287 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:06:02.832494 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:06:02.834495 systemd-logind[1463]: Removed session 20.
Jan 17 00:06:03.596184 sshd[5805]: Invalid user fa from 185.246.128.171 port 3203
Jan 17 00:06:03.794083 sshd[5805]: Disconnecting invalid user fa 185.246.128.171 port 3203: Change of username or service not allowed: (fa,ssh-connection) -> (es2,ssh-connection) [preauth]
Jan 17 00:06:03.797073 systemd[1]: sshd@65-167.235.246.183:22-185.246.128.171:3203.service: Deactivated successfully.
Jan 17 00:06:04.054128 systemd[1]: Started sshd@66-167.235.246.183:22-185.246.128.171:41045.service - OpenSSH per-connection server daemon (185.246.128.171:41045).
Jan 17 00:06:05.004714 kubelet[2574]: E0117 00:06:05.004676 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9"
Jan 17 00:06:05.105537 sshd[5813]: Invalid user es2 from 185.246.128.171 port 41045
Jan 17 00:06:05.235891 sshd[5813]: Disconnecting invalid user es2 185.246.128.171 port 41045: Change of username or service not allowed: (es2,ssh-connection) -> (user03,ssh-connection) [preauth]
Jan 17 00:06:05.236912 systemd[1]: sshd@66-167.235.246.183:22-185.246.128.171:41045.service: Deactivated successfully.
Jan 17 00:06:05.932905 systemd[1]: Started sshd@67-167.235.246.183:22-185.246.128.171:39435.service - OpenSSH per-connection server daemon (185.246.128.171:39435).
Jan 17 00:06:06.354044 sshd[5818]: Invalid user user03 from 185.246.128.171 port 39435
Jan 17 00:06:06.652675 sshd[5818]: Disconnecting invalid user user03 185.246.128.171 port 39435: Change of username or service not allowed: (user03,ssh-connection) -> (openvswitch,ssh-connection) [preauth]
Jan 17 00:06:06.655403 systemd[1]: sshd@67-167.235.246.183:22-185.246.128.171:39435.service: Deactivated successfully.
Jan 17 00:06:07.156929 systemd[1]: Started sshd@68-167.235.246.183:22-185.246.128.171:55657.service - OpenSSH per-connection server daemon (185.246.128.171:55657).
Jan 17 00:06:07.901072 sshd[5823]: Invalid user openvswitch from 185.246.128.171 port 55657
Jan 17 00:06:07.938894 systemd[1]: Started sshd@69-167.235.246.183:22-4.153.228.146:47942.service - OpenSSH per-connection server daemon (4.153.228.146:47942).
Jan 17 00:06:08.387369 sshd[5823]: Disconnecting invalid user openvswitch 185.246.128.171 port 55657: Change of username or service not allowed: (openvswitch,ssh-connection) -> (ddd,ssh-connection) [preauth]
Jan 17 00:06:08.390920 systemd[1]: sshd@68-167.235.246.183:22-185.246.128.171:55657.service: Deactivated successfully.
Jan 17 00:06:08.549636 sshd[5826]: Accepted publickey for core from 4.153.228.146 port 47942 ssh2: RSA SHA256:+BFNXgSfyeCELC3j4AjLa1v7I98Hfb0KdstUpL3+ysk
Jan 17 00:06:08.552602 sshd[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:06:08.561019 systemd-logind[1463]: New session 21 of user core.
Jan 17 00:06:08.567719 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:06:08.944138 systemd[1]: Started sshd@70-167.235.246.183:22-185.246.128.171:56825.service - OpenSSH per-connection server daemon (185.246.128.171:56825).
Jan 17 00:06:09.010611 kubelet[2574]: E0117 00:06:09.010124 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4"
Jan 17 00:06:09.112224 sshd[5826]: pam_unix(sshd:session): session closed for user core
Jan 17 00:06:09.119264 systemd[1]: sshd@69-167.235.246.183:22-4.153.228.146:47942.service: Deactivated successfully.
Jan 17 00:06:09.121404 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:06:09.128804 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:06:09.130524 systemd-logind[1463]: Removed session 21.
Jan 17 00:06:09.755463 sshd[5840]: Invalid user ddd from 185.246.128.171 port 56825
Jan 17 00:06:09.815434 sshd[5840]: Disconnecting invalid user ddd 185.246.128.171 port 56825: Change of username or service not allowed: (ddd,ssh-connection) -> (richard,ssh-connection) [preauth]
Jan 17 00:06:09.819983 systemd[1]: sshd@70-167.235.246.183:22-185.246.128.171:56825.service: Deactivated successfully.
Jan 17 00:06:10.066325 systemd[1]: Started sshd@71-167.235.246.183:22-185.246.128.171:64290.service - OpenSSH per-connection server daemon (185.246.128.171:64290).
Jan 17 00:06:12.336942 sshd[5853]: Invalid user richard from 185.246.128.171 port 64290
Jan 17 00:06:14.006016 kubelet[2574]: E0117 00:06:14.005691 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac"
Jan 17 00:06:14.204191 sshd[5853]: Disconnecting invalid user richard 185.246.128.171 port 64290: Change of username or service not allowed: (richard,ssh-connection) -> (mohamed,ssh-connection) [preauth]
Jan 17 00:06:14.208487 systemd[1]: sshd@71-167.235.246.183:22-185.246.128.171:64290.service: Deactivated successfully.
Jan 17 00:06:14.396005 systemd[1]: Started sshd@72-167.235.246.183:22-184.168.21.211:36426.service - OpenSSH per-connection server daemon (184.168.21.211:36426).
Jan 17 00:06:14.785014 systemd[1]: Started sshd@73-167.235.246.183:22-185.246.128.171:40268.service - OpenSSH per-connection server daemon (185.246.128.171:40268).
Jan 17 00:06:15.007012 kubelet[2574]: E0117 00:06:15.006310 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd"
Jan 17 00:06:15.007012 kubelet[2574]: E0117 00:06:15.006892 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90"
Jan 17 00:06:15.249713 sshd[5861]: Invalid user mohamed from 185.246.128.171 port 40268
Jan 17 00:06:15.306469 sshd[5858]: Invalid user minikube from 184.168.21.211 port 36426
Jan 17 00:06:15.465165 sshd[5858]: Received disconnect from 184.168.21.211 port 36426:11: Bye Bye [preauth]
Jan 17 00:06:15.465165 sshd[5858]: Disconnected from invalid user minikube 184.168.21.211 port 36426 [preauth]
Jan 17 00:06:15.469407 systemd[1]: sshd@72-167.235.246.183:22-184.168.21.211:36426.service: Deactivated successfully.
Jan 17 00:06:15.565202 sshd[5861]: Disconnecting invalid user mohamed 185.246.128.171 port 40268: Change of username or service not allowed: (mohamed,ssh-connection) -> (xiaoxiao,ssh-connection) [preauth]
Jan 17 00:06:15.569028 systemd[1]: sshd@73-167.235.246.183:22-185.246.128.171:40268.service: Deactivated successfully.
Jan 17 00:06:15.697838 systemd[1]: Started sshd@74-167.235.246.183:22-185.246.128.171:51879.service - OpenSSH per-connection server daemon (185.246.128.171:51879).
Jan 17 00:06:16.005565 kubelet[2574]: E0117 00:06:16.005457 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-6ghq5" podUID="3a9d9fee-4b98-43fe-862d-a1e26e86f2ee"
Jan 17 00:06:17.007061 containerd[1486]: time="2026-01-17T00:06:17.007005814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:06:17.339298 containerd[1486]: time="2026-01-17T00:06:17.339031218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:06:17.340549 containerd[1486]: time="2026-01-17T00:06:17.340409943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:06:17.340549 containerd[1486]: time="2026-01-17T00:06:17.340501503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:06:17.340737 kubelet[2574]: E0117 00:06:17.340693 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:06:17.341111 kubelet[2574]: E0117 00:06:17.340750 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:06:17.341111 kubelet[2574]: E0117 00:06:17.340866 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-798d7c56dc-ghv47_calico-apiserver(e2865d0a-d4d2-402d-89fc-69d90c7c76b9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:06:17.341111 kubelet[2574]: E0117 00:06:17.340908 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798d7c56dc-ghv47" podUID="e2865d0a-d4d2-402d-89fc-69d90c7c76b9"
Jan 17 00:06:17.448256 sshd[5870]: Invalid user xiaoxiao from 185.246.128.171 port 51879
Jan 17 00:06:17.533584 sshd[5870]: Disconnecting invalid user xiaoxiao 185.246.128.171 port 51879: Change of username or service not allowed: (xiaoxiao,ssh-connection) -> (aaa,ssh-connection) [preauth]
Jan 17 00:06:17.537054 systemd[1]: sshd@74-167.235.246.183:22-185.246.128.171:51879.service: Deactivated successfully.
Jan 17 00:06:18.317886 systemd[1]: Started sshd@75-167.235.246.183:22-185.246.128.171:47368.service - OpenSSH per-connection server daemon (185.246.128.171:47368).
Jan 17 00:06:20.013425 sshd[5875]: Invalid user aaa from 185.246.128.171 port 47368
Jan 17 00:06:20.052557 sshd[5875]: Disconnecting invalid user aaa 185.246.128.171 port 47368: Change of username or service not allowed: (aaa,ssh-connection) -> (odoo18,ssh-connection) [preauth]
Jan 17 00:06:20.054957 systemd[1]: sshd@75-167.235.246.183:22-185.246.128.171:47368.service: Deactivated successfully.
Jan 17 00:06:21.129386 systemd[1]: Started sshd@76-167.235.246.183:22-185.246.128.171:45248.service - OpenSSH per-connection server daemon (185.246.128.171:45248).
Jan 17 00:06:21.740634 sshd[5884]: Invalid user odoo18 from 185.246.128.171 port 45248
Jan 17 00:06:21.924602 sshd[5884]: Disconnecting invalid user odoo18 185.246.128.171 port 45248: Change of username or service not allowed: (odoo18,ssh-connection) -> (loginuser,ssh-connection) [preauth]
Jan 17 00:06:21.927979 systemd[1]: sshd@76-167.235.246.183:22-185.246.128.171:45248.service: Deactivated successfully.
Jan 17 00:06:22.340026 systemd[1]: Started sshd@77-167.235.246.183:22-185.246.128.171:50459.service - OpenSSH per-connection server daemon (185.246.128.171:50459).
Jan 17 00:06:22.983966 sshd[5889]: Invalid user loginuser from 185.246.128.171 port 50459
Jan 17 00:06:23.419675 sshd[5889]: Disconnecting invalid user loginuser 185.246.128.171 port 50459: Change of username or service not allowed: (loginuser,ssh-connection) -> (astra,ssh-connection) [preauth]
Jan 17 00:06:23.423311 systemd[1]: sshd@77-167.235.246.183:22-185.246.128.171:50459.service: Deactivated successfully.
Jan 17 00:06:23.656977 systemd[1]: Started sshd@78-167.235.246.183:22-185.246.128.171:3101.service - OpenSSH per-connection server daemon (185.246.128.171:3101).
Jan 17 00:06:23.918188 kubelet[2574]: E0117 00:06:23.917940 2574 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49168->10.0.0.2:2379: read: connection timed out"
Jan 17 00:06:24.005747 kubelet[2574]: E0117 00:06:24.005686 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9c4977545-g698v" podUID="1edd65d8-b5e4-447f-a4cd-2de7f77232a4"
Jan 17 00:06:24.024971 sshd[5894]: Invalid user astra from 185.246.128.171 port 3101
Jan 17 00:06:24.139476 sshd[5894]: Disconnecting invalid user astra 185.246.128.171 port 3101: Change of username or service not allowed: (astra,ssh-connection) -> (postgres,ssh-connection) [preauth]
Jan 17 00:06:24.141891 systemd[1]: sshd@78-167.235.246.183:22-185.246.128.171:3101.service: Deactivated successfully.
Jan 17 00:06:24.330703 systemd[1]: cri-containerd-8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019.scope: Deactivated successfully.
Jan 17 00:06:24.330982 systemd[1]: cri-containerd-8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019.scope: Consumed 5.062s CPU time, 18.1M memory peak, 0B memory swap peak.
Jan 17 00:06:24.373156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019-rootfs.mount: Deactivated successfully.
Jan 17 00:06:24.382218 containerd[1486]: time="2026-01-17T00:06:24.382156595Z" level=info msg="shim disconnected" id=8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019 namespace=k8s.io
Jan 17 00:06:24.384597 containerd[1486]: time="2026-01-17T00:06:24.382663836Z" level=warning msg="cleaning up after shim disconnected" id=8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019 namespace=k8s.io
Jan 17 00:06:24.384597 containerd[1486]: time="2026-01-17T00:06:24.382689556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:06:24.661027 systemd[1]: Started sshd@79-167.235.246.183:22-185.246.128.171:19921.service - OpenSSH per-connection server daemon (185.246.128.171:19921).
Jan 17 00:06:24.845480 systemd[1]: cri-containerd-d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6.scope: Deactivated successfully.
Jan 17 00:06:24.846542 systemd[1]: cri-containerd-d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6.scope: Consumed 38.140s CPU time.
Jan 17 00:06:24.872271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6-rootfs.mount: Deactivated successfully.
Jan 17 00:06:24.874764 containerd[1486]: time="2026-01-17T00:06:24.874702738Z" level=info msg="shim disconnected" id=d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6 namespace=k8s.io
Jan 17 00:06:24.875293 containerd[1486]: time="2026-01-17T00:06:24.874922898Z" level=warning msg="cleaning up after shim disconnected" id=d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6 namespace=k8s.io
Jan 17 00:06:24.875293 containerd[1486]: time="2026-01-17T00:06:24.874939619Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:06:24.958115 kubelet[2574]: I0117 00:06:24.958006 2574 scope.go:117] "RemoveContainer" containerID="8397795e2ba17b458e645aae1ef980c3dbf7d17466445cf965aac1328cc27019"
Jan 17 00:06:24.959501 kubelet[2574]: I0117 00:06:24.959371 2574 scope.go:117] "RemoveContainer" containerID="d759a612e2f170ed855fe6bc51fc31a9c117177508eb348cedb91ff1a9c97fd6"
Jan 17 00:06:24.961883 containerd[1486]: time="2026-01-17T00:06:24.961795556Z" level=info msg="CreateContainer within sandbox \"2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 00:06:24.962568 containerd[1486]: time="2026-01-17T00:06:24.962307078Z" level=info msg="CreateContainer within sandbox \"f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:06:24.983027 containerd[1486]: time="2026-01-17T00:06:24.982983139Z" level=info msg="CreateContainer within sandbox \"2a2b402b74b9624ad9c7d74bdc8e02cb66b35a10107604b40cf0a9ae81312653\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"64ae26142f4de9d035a310105a596794107189e5c1fd3c79d9eb8a90d7fb7760\""
Jan 17 00:06:24.983733 containerd[1486]: time="2026-01-17T00:06:24.983700262Z" level=info msg="StartContainer for \"64ae26142f4de9d035a310105a596794107189e5c1fd3c79d9eb8a90d7fb7760\""
Jan 17 00:06:24.985184 containerd[1486]: time="2026-01-17T00:06:24.984328703Z" level=info msg="CreateContainer within sandbox \"f40bc843b149ecfcec3e42d0a824bbffe271c64bdc7fdb57bb09ca7229f16c5e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"af1425c13e4672de4833f7dd4bdfe449a4fcb6d153978bc41dbac501b1341b6a\""
Jan 17 00:06:24.985184 containerd[1486]: time="2026-01-17T00:06:24.985099466Z" level=info msg="StartContainer for \"af1425c13e4672de4833f7dd4bdfe449a4fcb6d153978bc41dbac501b1341b6a\""
Jan 17 00:06:25.023725 systemd[1]: Started cri-containerd-64ae26142f4de9d035a310105a596794107189e5c1fd3c79d9eb8a90d7fb7760.scope - libcontainer container 64ae26142f4de9d035a310105a596794107189e5c1fd3c79d9eb8a90d7fb7760.
Jan 17 00:06:25.028644 systemd[1]: Started cri-containerd-af1425c13e4672de4833f7dd4bdfe449a4fcb6d153978bc41dbac501b1341b6a.scope - libcontainer container af1425c13e4672de4833f7dd4bdfe449a4fcb6d153978bc41dbac501b1341b6a.
Jan 17 00:06:25.069865 containerd[1486]: time="2026-01-17T00:06:25.069759834Z" level=info msg="StartContainer for \"64ae26142f4de9d035a310105a596794107189e5c1fd3c79d9eb8a90d7fb7760\" returns successfully"
Jan 17 00:06:25.078277 containerd[1486]: time="2026-01-17T00:06:25.077924938Z" level=info msg="StartContainer for \"af1425c13e4672de4833f7dd4bdfe449a4fcb6d153978bc41dbac501b1341b6a\" returns successfully"
Jan 17 00:06:25.565419 sshd[5921]: Invalid user postgres from 185.246.128.171 port 19921
Jan 17 00:06:26.007715 containerd[1486]: time="2026-01-17T00:06:26.007474262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 00:06:26.009580 kubelet[2574]: E0117 00:06:26.009458 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-txw7d" podUID="362a3452-c30b-406b-9bbb-9543b4b09e90"
Jan 17 00:06:26.351622 containerd[1486]: time="2026-01-17T00:06:26.351408657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:06:26.352881 containerd[1486]: time="2026-01-17T00:06:26.352811581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 00:06:26.353027 containerd[1486]: time="2026-01-17T00:06:26.352962981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:06:26.353305 kubelet[2574]: E0117 00:06:26.353163 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:06:26.353305 kubelet[2574]: E0117 00:06:26.353237 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:06:26.353668 kubelet[2574]: E0117 00:06:26.353478 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:06:26.354312 containerd[1486]: time="2026-01-17T00:06:26.354113265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 17 00:06:26.695262 containerd[1486]: time="2026-01-17T00:06:26.695115331Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:06:26.697553 containerd[1486]: time="2026-01-17T00:06:26.697359657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 17 00:06:26.697553 containerd[1486]: time="2026-01-17T00:06:26.697431298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:06:26.697814 kubelet[2574]: E0117 00:06:26.697703 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:06:26.697814 kubelet[2574]: E0117 00:06:26.697758 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:06:26.698201 kubelet[2574]: E0117 00:06:26.698093 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7d698fdbf4-vwrcc_calico-system(a5e03e55-071e-4370-bbe3-a19857cfbfbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:06:26.698201 kubelet[2574]: E0117 00:06:26.698145 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d698fdbf4-vwrcc" podUID="a5e03e55-071e-4370-bbe3-a19857cfbfbd"
Jan 17 00:06:26.698346 containerd[1486]: time="2026-01-17T00:06:26.698182140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:06:27.038207 containerd[1486]: time="2026-01-17T00:06:27.038008201Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:06:27.040040 containerd[1486]: time="2026-01-17T00:06:27.039854286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:06:27.040214 containerd[1486]: time="2026-01-17T00:06:27.040052687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:06:27.040429 kubelet[2574]: E0117 00:06:27.040327 2574 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:06:27.040429 kubelet[2574]: E0117 00:06:27.040383 2574 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:06:27.041081 kubelet[2574]: E0117 00:06:27.040635 2574 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rctkw_calico-system(e730921e-fe6a-4325-b721-055844e798ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:06:27.041081 kubelet[2574]: E0117 00:06:27.040690 2574 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rctkw" podUID="e730921e-fe6a-4325-b721-055844e798ac"
Jan 17 00:06:28.227104 sshd[5921]: Disconnecting invalid user postgres 185.246.128.171 port 19921: Change of username or service not allowed: (postgres,ssh-connection) -> (manager,ssh-connection) [preauth]
Jan 17 00:06:28.230102 systemd[1]: sshd@79-167.235.246.183:22-185.246.128.171:19921.service: Deactivated successfully.
Jan 17 00:06:28.452752 systemd[1]: cri-containerd-bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc.scope: Deactivated successfully.
Jan 17 00:06:28.454866 systemd[1]: cri-containerd-bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc.scope: Consumed 3.838s CPU time, 16.2M memory peak, 0B memory swap peak.
Jan 17 00:06:28.477068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc-rootfs.mount: Deactivated successfully.
Jan 17 00:06:28.488866 containerd[1486]: time="2026-01-17T00:06:28.488802203Z" level=info msg="shim disconnected" id=bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc namespace=k8s.io
Jan 17 00:06:28.488866 containerd[1486]: time="2026-01-17T00:06:28.488860883Z" level=warning msg="cleaning up after shim disconnected" id=bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc namespace=k8s.io
Jan 17 00:06:28.488866 containerd[1486]: time="2026-01-17T00:06:28.488871563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:06:28.853758 systemd[1]: Started sshd@80-167.235.246.183:22-185.246.128.171:49410.service - OpenSSH per-connection server daemon (185.246.128.171:49410).
Jan 17 00:06:28.979880 kubelet[2574]: I0117 00:06:28.979841 2574 scope.go:117] "RemoveContainer" containerID="bbf44df4c1ca1dd1b0b8ea7068c31ee9c4b6bdf9c176afcf6d2bda8e1ec7d3fc"
Jan 17 00:06:28.982478 containerd[1486]: time="2026-01-17T00:06:28.982421993Z" level=info msg="CreateContainer within sandbox \"0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:06:28.996741 containerd[1486]: time="2026-01-17T00:06:28.996603633Z" level=info msg="CreateContainer within sandbox \"0cd3ed69c40727955fbe858084442ec0e1a248105601922126aaf776e04275b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b9d736ea35da6471c6ac1ccbac85c593b88ab86647a8a4fd719b844cf705316c\""
Jan 17 00:06:28.997768 containerd[1486]: time="2026-01-17T00:06:28.997166115Z" level=info msg="StartContainer for \"b9d736ea35da6471c6ac1ccbac85c593b88ab86647a8a4fd719b844cf705316c\""
Jan 17 00:06:29.033831 systemd[1]: Started cri-containerd-b9d736ea35da6471c6ac1ccbac85c593b88ab86647a8a4fd719b844cf705316c.scope - libcontainer container b9d736ea35da6471c6ac1ccbac85c593b88ab86647a8a4fd719b844cf705316c.
Jan 17 00:06:29.069603 containerd[1486]: time="2026-01-17T00:06:29.069488236Z" level=info msg="StartContainer for \"b9d736ea35da6471c6ac1ccbac85c593b88ab86647a8a4fd719b844cf705316c\" returns successfully"