Jan 20 01:40:37.169894 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 20 01:40:37.169916 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 19 23:25:42 -00 2026
Jan 20 01:40:37.169924 kernel: KASLR enabled
Jan 20 01:40:37.169930 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 20 01:40:37.169937 kernel: printk: bootconsole [pl11] enabled
Jan 20 01:40:37.169943 kernel: efi: EFI v2.7 by EDK II
Jan 20 01:40:37.169950 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 20 01:40:37.169956 kernel: random: crng init done
Jan 20 01:40:37.169962 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:40:37.169968 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 20 01:40:37.169975 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.169980 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.169988 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 20 01:40:37.169994 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170002 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170008 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170014 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170022 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170028 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170035 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 20 01:40:37.170041 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 20 01:40:37.170048 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 20 01:40:37.170054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 20 01:40:37.170060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 20 01:40:37.170067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 20 01:40:37.170073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 20 01:40:37.170079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 20 01:40:37.170086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 20 01:40:37.170094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 20 01:40:37.170100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 20 01:40:37.170106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 20 01:40:37.170113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 20 01:40:37.170122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 20 01:40:37.170128 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 20 01:40:37.170135 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 20 01:40:37.170148 kernel: Zone ranges:
Jan 20 01:40:37.170154 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 20 01:40:37.170160 kernel: DMA32 empty
Jan 20 01:40:37.170167 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:40:37.170173 kernel: Movable zone start for each node
Jan 20 01:40:37.170184 kernel: Early memory node ranges
Jan 20 01:40:37.170191 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 20 01:40:37.170198 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 20 01:40:37.170204 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 20 01:40:37.170211 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 20 01:40:37.170219 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 20 01:40:37.170226 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 20 01:40:37.170233 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 20 01:40:37.170240 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 20 01:40:37.170247 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 20 01:40:37.170253 kernel: psci: probing for conduit method from ACPI.
Jan 20 01:40:37.170260 kernel: psci: PSCIv1.1 detected in firmware.
Jan 20 01:40:37.170267 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 20 01:40:37.170273 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 20 01:40:37.170280 kernel: psci: SMC Calling Convention v1.4
Jan 20 01:40:37.170287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 20 01:40:37.170294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 20 01:40:37.170302 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 20 01:40:37.170308 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 20 01:40:37.170315 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 20 01:40:37.170322 kernel: Detected PIPT I-cache on CPU0
Jan 20 01:40:37.170329 kernel: CPU features: detected: GIC system register CPU interface
Jan 20 01:40:37.170335 kernel: CPU features: detected: Hardware dirty bit management
Jan 20 01:40:37.170342 kernel: CPU features: detected: Spectre-BHB
Jan 20 01:40:37.170349 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 20 01:40:37.170355 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 20 01:40:37.170362 kernel: CPU features: detected: ARM erratum 1418040
Jan 20 01:40:37.170369 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 20 01:40:37.170377 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 20 01:40:37.170383 kernel: alternatives: applying boot alternatives
Jan 20 01:40:37.170392 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d
Jan 20 01:40:37.170399 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 01:40:37.170406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:40:37.170412 kernel: Fallback order for Node 0: 0
Jan 20 01:40:37.170419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 20 01:40:37.170426 kernel: Policy zone: Normal
Jan 20 01:40:37.170433 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:40:37.170439 kernel: software IO TLB: area num 2.
Jan 20 01:40:37.170446 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 20 01:40:37.170455 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 20 01:40:37.170462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 20 01:40:37.170469 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:40:37.170476 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:40:37.170483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 20 01:40:37.170490 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:40:37.170497 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:40:37.170504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:40:37.170511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 20 01:40:37.170517 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 20 01:40:37.170524 kernel: GICv3: 960 SPIs implemented
Jan 20 01:40:37.170532 kernel: GICv3: 0 Extended SPIs implemented
Jan 20 01:40:37.170539 kernel: Root IRQ handler: gic_handle_irq
Jan 20 01:40:37.170546 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 20 01:40:37.170552 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 20 01:40:37.170559 kernel: ITS: No ITS available, not enabling LPIs
Jan 20 01:40:37.170566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:40:37.170573 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 20 01:40:37.170579 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 20 01:40:37.170586 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 20 01:40:37.170593 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 20 01:40:37.170600 kernel: Console: colour dummy device 80x25
Jan 20 01:40:37.170609 kernel: printk: console [tty1] enabled
Jan 20 01:40:37.170616 kernel: ACPI: Core revision 20230628
Jan 20 01:40:37.170623 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 20 01:40:37.170630 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:40:37.170637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 01:40:37.170644 kernel: landlock: Up and running.
Jan 20 01:40:37.170651 kernel: SELinux: Initializing.
Jan 20 01:40:37.170658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:40:37.170665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 20 01:40:37.170673 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:40:37.170681 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 20 01:40:37.170688 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 20 01:40:37.170694 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 20 01:40:37.170701 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 20 01:40:37.170708 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:40:37.170715 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:40:37.170722 kernel: Remapping and enabling EFI services.
Jan 20 01:40:37.170735 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:40:37.170742 kernel: Detected PIPT I-cache on CPU1
Jan 20 01:40:37.170749 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 20 01:40:37.170757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 20 01:40:37.170766 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 20 01:40:37.170773 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 01:40:37.170781 kernel: SMP: Total of 2 processors activated.
Jan 20 01:40:37.170788 kernel: CPU features: detected: 32-bit EL0 Support
Jan 20 01:40:37.170796 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 20 01:40:37.170805 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 20 01:40:37.170812 kernel: CPU features: detected: CRC32 instructions
Jan 20 01:40:37.170819 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 20 01:40:37.170826 kernel: CPU features: detected: LSE atomic instructions
Jan 20 01:40:37.170834 kernel: CPU features: detected: Privileged Access Never
Jan 20 01:40:37.170841 kernel: CPU: All CPU(s) started at EL1
Jan 20 01:40:37.170848 kernel: alternatives: applying system-wide alternatives
Jan 20 01:40:37.170855 kernel: devtmpfs: initialized
Jan 20 01:40:37.170862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:40:37.170871 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 20 01:40:37.170878 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:40:37.170885 kernel: SMBIOS 3.1.0 present.
Jan 20 01:40:37.170893 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 20 01:40:37.170900 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:40:37.170908 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 20 01:40:37.170915 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 01:40:37.170922 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 01:40:37.170930 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:40:37.170938 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 20 01:40:37.170946 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:40:37.170953 kernel: cpuidle: using governor menu
Jan 20 01:40:37.170960 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 20 01:40:37.170967 kernel: ASID allocator initialised with 32768 entries
Jan 20 01:40:37.170975 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:40:37.170982 kernel: Serial: AMBA PL011 UART driver
Jan 20 01:40:37.170989 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 20 01:40:37.170997 kernel: Modules: 0 pages in range for non-PLT usage
Jan 20 01:40:37.171005 kernel: Modules: 509008 pages in range for PLT usage
Jan 20 01:40:37.171013 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:40:37.171020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:40:37.171027 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 20 01:40:37.171035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 20 01:40:37.171042 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:40:37.171049 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:40:37.171056 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 20 01:40:37.171064 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 20 01:40:37.171072 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:40:37.171080 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:40:37.171087 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:40:37.171094 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:40:37.171101 kernel: ACPI: Interpreter enabled
Jan 20 01:40:37.171108 kernel: ACPI: Using GIC for interrupt routing
Jan 20 01:40:37.171116 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 20 01:40:37.171123 kernel: printk: console [ttyAMA0] enabled
Jan 20 01:40:37.171130 kernel: printk: bootconsole [pl11] disabled
Jan 20 01:40:37.173166 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 20 01:40:37.173180 kernel: iommu: Default domain type: Translated
Jan 20 01:40:37.173188 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 20 01:40:37.173195 kernel: efivars: Registered efivars operations
Jan 20 01:40:37.173202 kernel: vgaarb: loaded
Jan 20 01:40:37.173210 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 20 01:40:37.173217 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:40:37.173225 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:40:37.173233 kernel: pnp: PnP ACPI init
Jan 20 01:40:37.173245 kernel: pnp: PnP ACPI: found 0 devices
Jan 20 01:40:37.173253 kernel: NET: Registered PF_INET protocol family
Jan 20 01:40:37.173260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 01:40:37.173268 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 20 01:40:37.173275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:40:37.173283 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:40:37.173290 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 20 01:40:37.173297 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 20 01:40:37.173305 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:40:37.173314 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 20 01:40:37.173321 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:40:37.173328 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:40:37.173336 kernel: kvm [1]: HYP mode not available
Jan 20 01:40:37.173343 kernel: Initialise system trusted keyrings
Jan 20 01:40:37.173350 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 20 01:40:37.173357 kernel: Key type asymmetric registered
Jan 20 01:40:37.173364 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:40:37.173372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 20 01:40:37.173380 kernel: io scheduler mq-deadline registered
Jan 20 01:40:37.173388 kernel: io scheduler kyber registered
Jan 20 01:40:37.173395 kernel: io scheduler bfq registered
Jan 20 01:40:37.173402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:40:37.173410 kernel: thunder_xcv, ver 1.0
Jan 20 01:40:37.173417 kernel: thunder_bgx, ver 1.0
Jan 20 01:40:37.173424 kernel: nicpf, ver 1.0
Jan 20 01:40:37.173431 kernel: nicvf, ver 1.0
Jan 20 01:40:37.173573 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 20 01:40:37.173650 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T01:40:36 UTC (1768873236)
Jan 20 01:40:37.173660 kernel: efifb: probing for efifb
Jan 20 01:40:37.173668 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 20 01:40:37.173675 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 20 01:40:37.173682 kernel: efifb: scrolling: redraw
Jan 20 01:40:37.173690 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 20 01:40:37.173697 kernel: Console: switching to colour frame buffer device 128x48
Jan 20 01:40:37.173704 kernel: fb0: EFI VGA frame buffer device
Jan 20 01:40:37.173714 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 20 01:40:37.173722 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 01:40:37.173730 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 20 01:40:37.173737 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 20 01:40:37.173744 kernel: watchdog: Hard watchdog permanently disabled
Jan 20 01:40:37.173752 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:40:37.173759 kernel: Segment Routing with IPv6
Jan 20 01:40:37.173766 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:40:37.173773 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:40:37.173782 kernel: Key type dns_resolver registered
Jan 20 01:40:37.173789 kernel: registered taskstats version 1
Jan 20 01:40:37.173796 kernel: Loading compiled-in X.509 certificates
Jan 20 01:40:37.173804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 78d001f5b2e422df1e406698b80c7183ecdd19cf'
Jan 20 01:40:37.173811 kernel: Key type .fscrypt registered
Jan 20 01:40:37.173818 kernel: Key type fscrypt-provisioning registered
Jan 20 01:40:37.173825 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:40:37.173833 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:40:37.173840 kernel: ima: No architecture policies found
Jan 20 01:40:37.173849 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 20 01:40:37.173856 kernel: clk: Disabling unused clocks
Jan 20 01:40:37.173863 kernel: Freeing unused kernel memory: 39424K
Jan 20 01:40:37.173871 kernel: Run /init as init process
Jan 20 01:40:37.173878 kernel: with arguments:
Jan 20 01:40:37.173885 kernel: /init
Jan 20 01:40:37.173892 kernel: with environment:
Jan 20 01:40:37.173899 kernel: HOME=/
Jan 20 01:40:37.173906 kernel: TERM=linux
Jan 20 01:40:37.173916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 01:40:37.173927 systemd[1]: Detected virtualization microsoft.
Jan 20 01:40:37.173934 systemd[1]: Detected architecture arm64.
Jan 20 01:40:37.173942 systemd[1]: Running in initrd.
Jan 20 01:40:37.173949 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:40:37.173957 systemd[1]: Hostname set to .
Jan 20 01:40:37.173965 systemd[1]: Initializing machine ID from random generator.
Jan 20 01:40:37.173975 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:40:37.173983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:40:37.173991 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:40:37.173999 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:40:37.174008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:40:37.174016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:40:37.174024 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:40:37.174034 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:40:37.174043 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:40:37.174051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:40:37.174059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:40:37.174067 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:40:37.174075 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:40:37.174083 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:40:37.174090 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:40:37.174099 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:40:37.174108 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:40:37.174116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 01:40:37.174124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 01:40:37.174132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:40:37.174151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:40:37.174160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:40:37.174168 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:40:37.174176 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 01:40:37.174187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:40:37.174195 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 01:40:37.174202 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 01:40:37.174210 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:40:37.174218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:40:37.174244 systemd-journald[218]: Collecting audit messages is disabled.
Jan 20 01:40:37.174265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:40:37.174273 systemd-journald[218]: Journal started
Jan 20 01:40:37.174291 systemd-journald[218]: Runtime Journal (/run/log/journal/f852d12ffe904b619fa0ebaddb51ef5b) is 8.0M, max 78.5M, 70.5M free.
Jan 20 01:40:37.175552 systemd-modules-load[219]: Inserted module 'overlay'
Jan 20 01:40:37.199198 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:40:37.199221 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 01:40:37.203070 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 20 01:40:37.208370 kernel: Bridge firewalling registered
Jan 20 01:40:37.206938 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 01:40:37.217338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:40:37.222949 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 01:40:37.231206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:40:37.238802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:37.257331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:40:37.263486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:40:37.282905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 01:40:37.297242 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:40:37.310158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:40:37.321233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:40:37.325972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 01:40:37.335511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:40:37.355322 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 01:40:37.365289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:40:37.378229 dracut-cmdline[252]: dracut-dracut-053
Jan 20 01:40:37.391496 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d
Jan 20 01:40:37.382351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:40:37.399354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:40:37.438128 systemd-resolved[256]: Positive Trust Anchors:
Jan 20 01:40:37.438164 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:40:37.438196 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:40:37.440958 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jan 20 01:40:37.447210 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:40:37.452370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:40:37.540150 kernel: SCSI subsystem initialized
Jan 20 01:40:37.546158 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:40:37.556162 kernel: iscsi: registered transport (tcp)
Jan 20 01:40:37.572216 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:40:37.572253 kernel: QLogic iSCSI HBA Driver
Jan 20 01:40:37.610561 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:40:37.621392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:40:37.653298 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 01:40:37.653351 kernel: device-mapper: uevent: version 1.0.3
Jan 20 01:40:37.658650 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 01:40:37.704159 kernel: raid6: neonx8 gen() 15817 MB/s
Jan 20 01:40:37.724144 kernel: raid6: neonx4 gen() 15685 MB/s
Jan 20 01:40:37.743142 kernel: raid6: neonx2 gen() 13278 MB/s
Jan 20 01:40:37.762142 kernel: raid6: neonx1 gen() 10504 MB/s
Jan 20 01:40:37.782143 kernel: raid6: int64x8 gen() 6978 MB/s
Jan 20 01:40:37.801150 kernel: raid6: int64x4 gen() 7360 MB/s
Jan 20 01:40:37.820152 kernel: raid6: int64x2 gen() 6147 MB/s
Jan 20 01:40:37.842699 kernel: raid6: int64x1 gen() 5071 MB/s
Jan 20 01:40:37.842753 kernel: raid6: using algorithm neonx8 gen() 15817 MB/s
Jan 20 01:40:37.865085 kernel: raid6: .... xor() 12049 MB/s, rmw enabled
Jan 20 01:40:37.865109 kernel: raid6: using neon recovery algorithm
Jan 20 01:40:37.874871 kernel: xor: measuring software checksum speed
Jan 20 01:40:37.874904 kernel: 8regs : 19793 MB/sec
Jan 20 01:40:37.880673 kernel: 32regs : 19092 MB/sec
Jan 20 01:40:37.880702 kernel: arm64_neon : 26866 MB/sec
Jan 20 01:40:37.884067 kernel: xor: using function: arm64_neon (26866 MB/sec)
Jan 20 01:40:37.934350 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:40:37.944545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:40:37.957287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:40:37.976793 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 20 01:40:37.980010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:40:38.000282 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:40:38.014400 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation
Jan 20 01:40:38.041767 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:40:38.054278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:40:38.089556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:40:38.109712 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 01:40:38.128180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:40:38.138877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:40:38.148228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:40:38.157418 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:40:38.173320 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 01:40:38.189965 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:40:38.220709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 01:40:38.229382 kernel: hv_vmbus: Vmbus version:5.3
Jan 20 01:40:38.220845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:40:38.250047 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:40:38.268245 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 20 01:40:38.268268 kernel: hv_vmbus: registering driver hv_netvsc
Jan 20 01:40:38.268278 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 01:40:38.258064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:40:38.283604 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 20 01:40:38.258246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:38.297214 kernel: hv_vmbus: registering driver hv_storvsc
Jan 20 01:40:38.297234 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 20 01:40:38.279589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:40:38.316626 kernel: hv_vmbus: registering driver hid_hyperv
Jan 20 01:40:38.316671 kernel: scsi host1: storvsc_host_t
Jan 20 01:40:38.316992 kernel: PTP clock support registered
Jan 20 01:40:38.313371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:40:38.359931 kernel: scsi host0: storvsc_host_t
Jan 20 01:40:38.360084 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 20 01:40:38.360117 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 20 01:40:38.360129 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: VF slot 1 added
Jan 20 01:40:38.360243 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 20 01:40:38.360330 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 20 01:40:38.349038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:40:38.379025 kernel: hv_utils: Registering HyperV Utility Driver
Jan 20 01:40:38.379044 kernel: hv_vmbus: registering driver hv_pci
Jan 20 01:40:38.349153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:38.390740 kernel: hv_pci 9b7f7be9-f695-47a8-a2f3-aaf9f19a1588: PCI VMBus probing: Using version 0x10004
Jan 20 01:40:38.391837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:40:38.404857 kernel: hv_vmbus: registering driver hv_utils
Jan 20 01:40:38.411153 kernel: hv_utils: Heartbeat IC version 3.0
Jan 20 01:40:38.411181 kernel: hv_utils: Shutdown IC version 3.2
Jan 20 01:40:38.416696 kernel: hv_pci 9b7f7be9-f695-47a8-a2f3-aaf9f19a1588: PCI host bridge to bus f695:00
Jan 20 01:40:38.416838 kernel: hv_utils: TimeSync IC version 4.0
Jan 20 01:40:38.478417 systemd-resolved[256]: Clock change detected. Flushing caches.
Jan 20 01:40:38.486384 kernel: pci_bus f695:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 20 01:40:38.486535 kernel: pci_bus f695:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 20 01:40:38.490910 kernel: pci f695:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 20 01:40:38.510231 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 20 01:40:38.510407 kernel: pci f695:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 20 01:40:38.510433 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 01:40:38.510443 kernel: pci f695:00:02.0: enabling Extended Tags
Jan 20 01:40:38.515196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:38.544397 kernel: pci f695:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f695:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 20 01:40:38.544559 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 20 01:40:38.544665 kernel: pci_bus f695:00: busn_res: [bus 00-ff] end is updated to 00
Jan 20 01:40:38.544766 kernel: pci f695:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 20 01:40:38.548051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:40:38.577910 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 20 01:40:38.578154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 20 01:40:38.578283 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 20 01:40:38.590920 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 20 01:40:38.591109 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 20 01:40:38.591207 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 20 01:40:38.605244 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:40:38.605291 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 20 01:40:38.609219 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:40:38.631835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#163 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 20 01:40:38.650352 kernel: mlx5_core f695:00:02.0: enabling device (0000 -> 0002)
Jan 20 01:40:38.655908 kernel: mlx5_core f695:00:02.0: firmware version: 16.30.5026
Jan 20 01:40:38.849307 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: VF registering: eth1
Jan 20 01:40:38.849520 kernel: mlx5_core f695:00:02.0 eth1: joined to eth0
Jan 20 01:40:38.855946 kernel: mlx5_core f695:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 20 01:40:38.864917 kernel: mlx5_core f695:00:02.0 enP63125s1: renamed from eth1
Jan 20 01:40:39.103730 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 20 01:40:39.120993 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (500)
Jan 20 01:40:39.136761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 20 01:40:39.171667 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 20 01:40:39.199915 kernel: BTRFS: device fsid ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (507)
Jan 20 01:40:39.213457 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 20 01:40:39.219218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 20 01:40:39.247101 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 01:40:39.265914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:40:39.271918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:40:40.284969 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 20 01:40:40.285631 disk-uuid[614]: The operation has completed successfully.
Jan 20 01:40:40.349120 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 20 01:40:40.350917 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 20 01:40:40.386024 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 20 01:40:40.396342 sh[728]: Success
Jan 20 01:40:40.426006 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 20 01:40:40.703526 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 20 01:40:40.709329 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 20 01:40:40.721009 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 20 01:40:40.748548 kernel: BTRFS info (device dm-0): first mount of filesystem ea3e8495-ec03-40ca-9b09-0f7e2a4e9620
Jan 20 01:40:40.748592 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:40:40.753840 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 20 01:40:40.757594 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 20 01:40:40.760781 kernel: BTRFS info (device dm-0): using free space tree
Jan 20 01:40:41.089976 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 20 01:40:41.094675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 20 01:40:41.118199 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 20 01:40:41.125055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 20 01:40:41.156572 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349
Jan 20 01:40:41.156612 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:40:41.159970 kernel: BTRFS info (device sda6): using free space tree
Jan 20 01:40:41.199027 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 20 01:40:41.206735 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 20 01:40:41.215936 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349
Jan 20 01:40:41.221592 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 20 01:40:41.240089 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 20 01:40:41.246922 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:40:41.262292 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 20 01:40:41.292866 systemd-networkd[912]: lo: Link UP
Jan 20 01:40:41.292876 systemd-networkd[912]: lo: Gained carrier
Jan 20 01:40:41.294535 systemd-networkd[912]: Enumeration completed
Jan 20 01:40:41.294613 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 20 01:40:41.300624 systemd[1]: Reached target network.target - Network.
Jan 20 01:40:41.303748 systemd-networkd[912]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:40:41.303751 systemd-networkd[912]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 20 01:40:41.376915 kernel: mlx5_core f695:00:02.0 enP63125s1: Link up
Jan 20 01:40:41.414042 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: Data path switched to VF: enP63125s1
Jan 20 01:40:41.413718 systemd-networkd[912]: enP63125s1: Link UP
Jan 20 01:40:41.413799 systemd-networkd[912]: eth0: Link UP
Jan 20 01:40:41.413937 systemd-networkd[912]: eth0: Gained carrier
Jan 20 01:40:41.413945 systemd-networkd[912]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 20 01:40:41.432090 systemd-networkd[912]: enP63125s1: Gained carrier
Jan 20 01:40:41.441927 systemd-networkd[912]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 20 01:40:42.046399 ignition[907]: Ignition 2.19.0
Jan 20 01:40:42.046413 ignition[907]: Stage: fetch-offline
Jan 20 01:40:42.046447 ignition[907]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:42.053525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 01:40:42.046456 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:42.046556 ignition[907]: parsed url from cmdline: ""
Jan 20 01:40:42.046559 ignition[907]: no config URL provided
Jan 20 01:40:42.046564 ignition[907]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 01:40:42.072155 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 20 01:40:42.046574 ignition[907]: no config at "/usr/lib/ignition/user.ign"
Jan 20 01:40:42.046580 ignition[907]: failed to fetch config: resource requires networking
Jan 20 01:40:42.049831 ignition[907]: Ignition finished successfully
Jan 20 01:40:42.090782 ignition[921]: Ignition 2.19.0
Jan 20 01:40:42.090787 ignition[921]: Stage: fetch
Jan 20 01:40:42.091014 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:42.091023 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:42.091126 ignition[921]: parsed url from cmdline: ""
Jan 20 01:40:42.091129 ignition[921]: no config URL provided
Jan 20 01:40:42.091137 ignition[921]: reading system config file "/usr/lib/ignition/user.ign"
Jan 20 01:40:42.091150 ignition[921]: no config at "/usr/lib/ignition/user.ign"
Jan 20 01:40:42.091169 ignition[921]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 20 01:40:42.198379 ignition[921]: GET result: OK
Jan 20 01:40:42.198469 ignition[921]: config has been read from IMDS userdata
Jan 20 01:40:42.198512 ignition[921]: parsing config with SHA512: 6cc937a13ec233d5bb00a81c220f829bd61f1ae980ab451c65de0b68d45d89ab022e3ebb9961d664fe729e7ebc53e7d7f393887b5620d66bdf3b24822b509e61
Jan 20 01:40:42.202364 unknown[921]: fetched base config from "system"
Jan 20 01:40:42.202708 ignition[921]: fetch: fetch complete
Jan 20 01:40:42.202371 unknown[921]: fetched base config from "system"
Jan 20 01:40:42.202712 ignition[921]: fetch: fetch passed
Jan 20 01:40:42.202376 unknown[921]: fetched user config from "azure"
Jan 20 01:40:42.202753 ignition[921]: Ignition finished successfully
Jan 20 01:40:42.206305 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 20 01:40:42.226021 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 20 01:40:42.244206 ignition[928]: Ignition 2.19.0
Jan 20 01:40:42.244216 ignition[928]: Stage: kargs
Jan 20 01:40:42.244417 ignition[928]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:42.250576 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 20 01:40:42.244427 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:42.245631 ignition[928]: kargs: kargs passed
Jan 20 01:40:42.245672 ignition[928]: Ignition finished successfully
Jan 20 01:40:42.274145 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 20 01:40:42.289468 ignition[935]: Ignition 2.19.0
Jan 20 01:40:42.289481 ignition[935]: Stage: disks
Jan 20 01:40:42.289694 ignition[935]: no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:42.293588 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 20 01:40:42.289703 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:42.299269 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 20 01:40:42.290813 ignition[935]: disks: disks passed
Jan 20 01:40:42.307300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 20 01:40:42.290858 ignition[935]: Ignition finished successfully
Jan 20 01:40:42.316399 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 20 01:40:42.324727 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 20 01:40:42.333311 systemd[1]: Reached target basic.target - Basic System.
Jan 20 01:40:42.356173 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 20 01:40:42.427505 systemd-fsck[943]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 20 01:40:42.435966 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 20 01:40:42.449054 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 20 01:40:42.500928 kernel: EXT4-fs (sda9): mounted filesystem c6ba54f7-cbb1-463d-980b-a8c197f00e73 r/w with ordered data mode. Quota mode: none.
Jan 20 01:40:42.501686 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 20 01:40:42.505674 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 20 01:40:42.545952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 01:40:42.564914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Jan 20 01:40:42.566959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 20 01:40:42.582593 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349
Jan 20 01:40:42.582610 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:40:42.586104 kernel: BTRFS info (device sda6): using free space tree
Jan 20 01:40:42.589406 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 20 01:40:42.608628 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 20 01:40:42.597872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 20 01:40:42.597911 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:40:42.604687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 01:40:42.612343 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 20 01:40:42.629087 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 01:40:42.684998 systemd-networkd[912]: eth0: Gained IPv6LL
Jan 20 01:40:43.115098 coreos-metadata[969]: Jan 20 01:40:43.115 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 20 01:40:43.121372 coreos-metadata[969]: Jan 20 01:40:43.121 INFO Fetch successful
Jan 20 01:40:43.121372 coreos-metadata[969]: Jan 20 01:40:43.121 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 20 01:40:43.134255 coreos-metadata[969]: Jan 20 01:40:43.134 INFO Fetch successful
Jan 20 01:40:43.151644 coreos-metadata[969]: Jan 20 01:40:43.151 INFO wrote hostname ci-4081.3.6-n-0046389dc1 to /sysroot/etc/hostname
Jan 20 01:40:43.159024 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 20 01:40:43.223068 initrd-setup-root[984]: cut: /sysroot/etc/passwd: No such file or directory
Jan 20 01:40:43.273851 initrd-setup-root[991]: cut: /sysroot/etc/group: No such file or directory
Jan 20 01:40:43.298909 initrd-setup-root[998]: cut: /sysroot/etc/shadow: No such file or directory
Jan 20 01:40:43.306528 initrd-setup-root[1005]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 20 01:40:44.680422 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 20 01:40:44.700085 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 20 01:40:44.708042 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 20 01:40:44.725429 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349
Jan 20 01:40:44.720673 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 20 01:40:44.742158 ignition[1072]: INFO : Ignition 2.19.0
Jan 20 01:40:44.746075 ignition[1072]: INFO : Stage: mount
Jan 20 01:40:44.746075 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:44.746075 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:44.746075 ignition[1072]: INFO : mount: mount passed
Jan 20 01:40:44.746075 ignition[1072]: INFO : Ignition finished successfully
Jan 20 01:40:44.746343 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 20 01:40:44.768052 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 20 01:40:44.777279 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 20 01:40:44.796117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 20 01:40:44.816913 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084)
Jan 20 01:40:44.826517 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349
Jan 20 01:40:44.826539 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 20 01:40:44.829736 kernel: BTRFS info (device sda6): using free space tree
Jan 20 01:40:44.835912 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 20 01:40:44.837542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 20 01:40:44.860736 ignition[1102]: INFO : Ignition 2.19.0
Jan 20 01:40:44.864535 ignition[1102]: INFO : Stage: files
Jan 20 01:40:44.864535 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:44.864535 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:44.864535 ignition[1102]: DEBUG : files: compiled without relabeling support, skipping
Jan 20 01:40:44.880854 ignition[1102]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 20 01:40:44.880854 ignition[1102]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 20 01:40:44.954156 ignition[1102]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 20 01:40:44.959859 ignition[1102]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 20 01:40:44.959859 ignition[1102]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 20 01:40:44.954571 unknown[1102]: wrote ssh authorized keys file for user: core
Jan 20 01:40:44.974610 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 20 01:40:44.974610 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 20 01:40:45.043743 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 20 01:40:45.189247 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 20 01:40:45.761764 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 20 01:40:46.189950 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 20 01:40:46.189950 ignition[1102]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 20 01:40:46.219430 ignition[1102]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 20 01:40:46.228187 ignition[1102]: INFO : files: files passed
Jan 20 01:40:46.228187 ignition[1102]: INFO : Ignition finished successfully
Jan 20 01:40:46.228543 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 20 01:40:46.257155 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 20 01:40:46.271193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 20 01:40:46.285234 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 20 01:40:46.285347 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 20 01:40:46.318146 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:40:46.314164 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:40:46.339662 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:40:46.339662 initrd-setup-root-after-ignition[1129]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 20 01:40:46.323602 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 20 01:40:46.354054 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 20 01:40:46.390729 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 01:40:46.392165 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 01:40:46.400511 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 20 01:40:46.409711 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 20 01:40:46.418254 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 20 01:40:46.421077 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 20 01:40:46.448123 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:40:46.461196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 20 01:40:46.480014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:40:46.485258 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:40:46.495015 systemd[1]: Stopped target timers.target - Timer Units.
Jan 20 01:40:46.503894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 01:40:46.504031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 20 01:40:46.519319 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 20 01:40:46.523854 systemd[1]: Stopped target basic.target - Basic System.
Jan 20 01:40:46.532503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 20 01:40:46.541100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 20 01:40:46.549534 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 20 01:40:46.558764 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 20 01:40:46.567518 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:40:46.578015 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 20 01:40:46.586872 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 20 01:40:46.596098 systemd[1]: Stopped target swap.target - Swaps.
Jan 20 01:40:46.603530 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 01:40:46.603652 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:40:46.614838 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:40:46.619536 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:40:46.628420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 20 01:40:46.632335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:40:46.637945 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 01:40:46.638060 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:40:46.651429 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 20 01:40:46.651540 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 20 01:40:46.656891 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 20 01:40:46.656990 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 20 01:40:46.664853 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 20 01:40:46.720550 ignition[1153]: INFO : Ignition 2.19.0
Jan 20 01:40:46.720550 ignition[1153]: INFO : Stage: umount
Jan 20 01:40:46.720550 ignition[1153]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 20 01:40:46.720550 ignition[1153]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 20 01:40:46.664950 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 20 01:40:46.746365 ignition[1153]: INFO : umount: umount passed
Jan 20 01:40:46.746365 ignition[1153]: INFO : Ignition finished successfully
Jan 20 01:40:46.694118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 20 01:40:46.706870 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 01:40:46.707020 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:40:46.732114 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 20 01:40:46.742543 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 01:40:46.742678 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:40:46.756957 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 01:40:46.757067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:40:46.771117 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 20 01:40:46.771749 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 20 01:40:46.771834 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 20 01:40:46.778681 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 20 01:40:46.778930 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 20 01:40:46.787962 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 20 01:40:46.788013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 20 01:40:46.796636 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 20 01:40:46.796680 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 20 01:40:46.805141 systemd[1]: Stopped target network.target - Network.
Jan 20 01:40:46.813332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 20 01:40:46.813376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 20 01:40:46.822561 systemd[1]: Stopped target paths.target - Path Units.
Jan 20 01:40:46.830513 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 01:40:46.834044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:40:46.839324 systemd[1]: Stopped target slices.target - Slice Units.
Jan 20 01:40:46.843036 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 20 01:40:46.850599 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 20 01:40:46.850641 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:40:46.855252 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 20 01:40:46.855290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:40:46.863212 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 20 01:40:46.863266 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 20 01:40:46.867355 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 20 01:40:46.867395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 01:40:46.875569 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 20 01:40:46.888797 systemd-networkd[912]: eth0: DHCPv6 lease lost
Jan 20 01:40:46.889998 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 20 01:40:46.898279 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 20 01:40:46.899984 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 20 01:40:46.908638 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 01:40:47.082006 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: Data path switched from VF: enP63125s1
Jan 20 01:40:46.908752 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 20 01:40:46.918656 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 20 01:40:46.919934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 20 01:40:46.930254 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 20 01:40:46.930331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:40:46.952119 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 20 01:40:46.961477 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 20 01:40:46.961551 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 20 01:40:46.970832 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 01:40:46.970885 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:40:46.978883 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 01:40:46.978933 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:40:46.986983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 01:40:46.987018 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:40:46.999989 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:40:47.030625 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 01:40:47.030856 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:40:47.040430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 01:40:47.040481 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:40:47.052433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 01:40:47.052473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:40:47.061886 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 01:40:47.061948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:40:47.081389 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 01:40:47.081446 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:40:47.091514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 01:40:47.091565 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:40:47.119109 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 20 01:40:47.131766 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 01:40:47.131840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:40:47.142961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:40:47.143010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:40:47.151854 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 20 01:40:47.151972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 20 01:40:47.164363 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 20 01:40:47.165926 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 20 01:40:47.173310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 01:40:47.173392 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 20 01:40:47.183675 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 20 01:40:47.192579 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 20 01:40:47.192659 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 20 01:40:47.216108 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 20 01:40:47.231204 systemd[1]: Switching root.
Jan 20 01:40:47.571404 systemd-journald[218]: Journal stopped
ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 20 01:40:37.170054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 20 01:40:37.170060 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 20 01:40:37.170067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 20 01:40:37.170073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 20 01:40:37.170079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 20 01:40:37.170086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 20 01:40:37.170094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 20 01:40:37.170100 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 20 01:40:37.170106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 20 01:40:37.170113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 20 01:40:37.170122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 20 01:40:37.170128 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 20 01:40:37.170135 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 20 01:40:37.170148 kernel: Zone ranges: Jan 20 01:40:37.170154 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jan 20 01:40:37.170160 kernel: DMA32 empty Jan 20 01:40:37.170167 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 20 01:40:37.170173 kernel: Movable zone start for each node Jan 20 01:40:37.170184 kernel: Early memory node ranges Jan 20 01:40:37.170191 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 20 01:40:37.170198 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 20 01:40:37.170204 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 20 01:40:37.170211 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 20 01:40:37.170219 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 20 01:40:37.170226 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 20 01:40:37.170233 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 20 01:40:37.170240 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 20 01:40:37.170247 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 20 01:40:37.170253 kernel: psci: probing for conduit method from ACPI. Jan 20 01:40:37.170260 kernel: psci: PSCIv1.1 detected in firmware. Jan 20 01:40:37.170267 kernel: psci: Using standard PSCI v0.2 function IDs Jan 20 01:40:37.170273 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jan 20 01:40:37.170280 kernel: psci: SMC Calling Convention v1.4 Jan 20 01:40:37.170287 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 20 01:40:37.170294 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 20 01:40:37.170302 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 20 01:40:37.170308 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 20 01:40:37.170315 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 20 01:40:37.170322 kernel: Detected PIPT I-cache on CPU0 Jan 20 01:40:37.170329 kernel: CPU features: detected: GIC system register CPU interface Jan 20 01:40:37.170335 kernel: CPU features: detected: Hardware dirty bit management Jan 20 01:40:37.170342 kernel: CPU features: detected: Spectre-BHB Jan 20 01:40:37.170349 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 20 01:40:37.170355 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 20 01:40:37.170362 kernel: CPU features: detected: ARM erratum 1418040 Jan 20 01:40:37.170369 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 20 01:40:37.170377 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 20 01:40:37.170383 kernel: alternatives: applying boot alternatives Jan 20 01:40:37.170392 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d Jan 20 01:40:37.170399 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 01:40:37.170406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 01:40:37.170412 kernel: Fallback order for Node 0: 0 Jan 20 01:40:37.170419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jan 20 01:40:37.170426 kernel: Policy zone: Normal Jan 20 01:40:37.170433 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 01:40:37.170439 kernel: software IO TLB: area num 2. Jan 20 01:40:37.170446 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 20 01:40:37.170455 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 20 01:40:37.170462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 01:40:37.170469 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 01:40:37.170476 kernel: rcu: RCU event tracing is enabled. Jan 20 01:40:37.170483 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 01:40:37.170490 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 01:40:37.170497 kernel: Tracing variant of Tasks RCU enabled. Jan 20 01:40:37.170504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 20 01:40:37.170511 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 01:40:37.170517 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 01:40:37.170524 kernel: GICv3: 960 SPIs implemented Jan 20 01:40:37.170532 kernel: GICv3: 0 Extended SPIs implemented Jan 20 01:40:37.170539 kernel: Root IRQ handler: gic_handle_irq Jan 20 01:40:37.170546 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 20 01:40:37.170552 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 20 01:40:37.170559 kernel: ITS: No ITS available, not enabling LPIs Jan 20 01:40:37.170566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 01:40:37.170573 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 01:40:37.170579 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 20 01:40:37.170586 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 20 01:40:37.170593 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 20 01:40:37.170600 kernel: Console: colour dummy device 80x25 Jan 20 01:40:37.170609 kernel: printk: console [tty1] enabled Jan 20 01:40:37.170616 kernel: ACPI: Core revision 20230628 Jan 20 01:40:37.170623 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 20 01:40:37.170630 kernel: pid_max: default: 32768 minimum: 301 Jan 20 01:40:37.170637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 20 01:40:37.170644 kernel: landlock: Up and running. Jan 20 01:40:37.170651 kernel: SELinux: Initializing. Jan 20 01:40:37.170658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:40:37.170665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 01:40:37.170673 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 01:40:37.170681 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 01:40:37.170688 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 20 01:40:37.170694 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 20 01:40:37.170701 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 20 01:40:37.170708 kernel: rcu: Hierarchical SRCU implementation. Jan 20 01:40:37.170715 kernel: rcu: Max phase no-delay instances is 400. Jan 20 01:40:37.170722 kernel: Remapping and enabling EFI services. Jan 20 01:40:37.170735 kernel: smp: Bringing up secondary CPUs ... Jan 20 01:40:37.170742 kernel: Detected PIPT I-cache on CPU1 Jan 20 01:40:37.170749 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 20 01:40:37.170757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 01:40:37.170766 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 20 01:40:37.170773 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 01:40:37.170781 kernel: SMP: Total of 2 processors activated. 
Jan 20 01:40:37.170788 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 01:40:37.170796 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 20 01:40:37.170805 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 01:40:37.170812 kernel: CPU features: detected: CRC32 instructions Jan 20 01:40:37.170819 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 01:40:37.170826 kernel: CPU features: detected: LSE atomic instructions Jan 20 01:40:37.170834 kernel: CPU features: detected: Privileged Access Never Jan 20 01:40:37.170841 kernel: CPU: All CPU(s) started at EL1 Jan 20 01:40:37.170848 kernel: alternatives: applying system-wide alternatives Jan 20 01:40:37.170855 kernel: devtmpfs: initialized Jan 20 01:40:37.170862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 01:40:37.170871 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 01:40:37.170878 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 01:40:37.170885 kernel: SMBIOS 3.1.0 present. Jan 20 01:40:37.170893 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 20 01:40:37.170900 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 01:40:37.170908 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 01:40:37.170915 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 01:40:37.170922 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 01:40:37.170930 kernel: audit: initializing netlink subsys (disabled) Jan 20 01:40:37.170938 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 20 01:40:37.170946 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 01:40:37.170953 kernel: cpuidle: using governor menu Jan 20 01:40:37.170960 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 01:40:37.170967 kernel: ASID allocator initialised with 32768 entries Jan 20 01:40:37.170975 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 01:40:37.170982 kernel: Serial: AMBA PL011 UART driver Jan 20 01:40:37.170989 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 20 01:40:37.170997 kernel: Modules: 0 pages in range for non-PLT usage Jan 20 01:40:37.171005 kernel: Modules: 509008 pages in range for PLT usage Jan 20 01:40:37.171013 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 01:40:37.171020 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 01:40:37.171027 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 20 01:40:37.171035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 20 01:40:37.171042 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 01:40:37.171049 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 01:40:37.171056 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 20 01:40:37.171064 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 20 01:40:37.171072 kernel: ACPI: Added _OSI(Module Device) Jan 20 01:40:37.171080 kernel: ACPI: Added _OSI(Processor Device) Jan 20 01:40:37.171087 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 01:40:37.171094 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 01:40:37.171101 kernel: ACPI: Interpreter enabled Jan 20 01:40:37.171108 kernel: ACPI: Using GIC for interrupt routing Jan 20 01:40:37.171116 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 20 01:40:37.171123 kernel: printk: console [ttyAMA0] enabled Jan 20 01:40:37.171130 kernel: printk: bootconsole [pl11] disabled Jan 20 01:40:37.173166 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 20 01:40:37.173180 kernel: iommu: Default domain type: Translated Jan 20 01:40:37.173188 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 20 01:40:37.173195 kernel: efivars: Registered efivars operations Jan 20 01:40:37.173202 kernel: vgaarb: loaded Jan 20 01:40:37.173210 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 20 01:40:37.173217 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 01:40:37.173225 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 01:40:37.173233 kernel: pnp: PnP ACPI init Jan 20 01:40:37.173245 kernel: pnp: PnP ACPI: found 0 devices Jan 20 01:40:37.173253 kernel: NET: Registered PF_INET protocol family Jan 20 01:40:37.173260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 01:40:37.173268 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 01:40:37.173275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 01:40:37.173283 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 01:40:37.173290 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 01:40:37.173297 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 01:40:37.173305 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:40:37.173314 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 01:40:37.173321 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 
01:40:37.173328 kernel: PCI: CLS 0 bytes, default 64 Jan 20 01:40:37.173336 kernel: kvm [1]: HYP mode not available Jan 20 01:40:37.173343 kernel: Initialise system trusted keyrings Jan 20 01:40:37.173350 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 01:40:37.173357 kernel: Key type asymmetric registered Jan 20 01:40:37.173364 kernel: Asymmetric key parser 'x509' registered Jan 20 01:40:37.173372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 01:40:37.173380 kernel: io scheduler mq-deadline registered Jan 20 01:40:37.173388 kernel: io scheduler kyber registered Jan 20 01:40:37.173395 kernel: io scheduler bfq registered Jan 20 01:40:37.173402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 01:40:37.173410 kernel: thunder_xcv, ver 1.0 Jan 20 01:40:37.173417 kernel: thunder_bgx, ver 1.0 Jan 20 01:40:37.173424 kernel: nicpf, ver 1.0 Jan 20 01:40:37.173431 kernel: nicvf, ver 1.0 Jan 20 01:40:37.173573 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 20 01:40:37.173650 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T01:40:36 UTC (1768873236) Jan 20 01:40:37.173660 kernel: efifb: probing for efifb Jan 20 01:40:37.173668 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 20 01:40:37.173675 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 20 01:40:37.173682 kernel: efifb: scrolling: redraw Jan 20 01:40:37.173690 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 20 01:40:37.173697 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 01:40:37.173704 kernel: fb0: EFI VGA frame buffer device Jan 20 01:40:37.173714 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 20 01:40:37.173722 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 01:40:37.173730 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 20 01:40:37.173737 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 20 01:40:37.173744 kernel: watchdog: Hard watchdog permanently disabled Jan 20 01:40:37.173752 kernel: NET: Registered PF_INET6 protocol family Jan 20 01:40:37.173759 kernel: Segment Routing with IPv6 Jan 20 01:40:37.173766 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 01:40:37.173773 kernel: NET: Registered PF_PACKET protocol family Jan 20 01:40:37.173782 kernel: Key type dns_resolver registered Jan 20 01:40:37.173789 kernel: registered taskstats version 1 Jan 20 01:40:37.173796 kernel: Loading compiled-in X.509 certificates Jan 20 01:40:37.173804 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 78d001f5b2e422df1e406698b80c7183ecdd19cf' Jan 20 01:40:37.173811 kernel: Key type .fscrypt registered Jan 20 01:40:37.173818 kernel: Key type fscrypt-provisioning registered Jan 20 01:40:37.173825 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 20 01:40:37.173833 kernel: ima: Allocated hash algorithm: sha1 Jan 20 01:40:37.173840 kernel: ima: No architecture policies found Jan 20 01:40:37.173849 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 01:40:37.173856 kernel: clk: Disabling unused clocks Jan 20 01:40:37.173863 kernel: Freeing unused kernel memory: 39424K Jan 20 01:40:37.173871 kernel: Run /init as init process Jan 20 01:40:37.173878 kernel: with arguments: Jan 20 01:40:37.173885 kernel: /init Jan 20 01:40:37.173892 kernel: with environment: Jan 20 01:40:37.173899 kernel: HOME=/ Jan 20 01:40:37.173906 kernel: TERM=linux Jan 20 01:40:37.173916 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:40:37.173927 systemd[1]: Detected virtualization microsoft. Jan 20 01:40:37.173934 systemd[1]: Detected architecture arm64. Jan 20 01:40:37.173942 systemd[1]: Running in initrd. Jan 20 01:40:37.173949 systemd[1]: No hostname configured, using default hostname. Jan 20 01:40:37.173957 systemd[1]: Hostname set to . Jan 20 01:40:37.173965 systemd[1]: Initializing machine ID from random generator. Jan 20 01:40:37.173975 systemd[1]: Queued start job for default target initrd.target. Jan 20 01:40:37.173983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:37.173991 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:37.173999 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 01:40:37.174008 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:40:37.174016 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 01:40:37.174024 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 01:40:37.174034 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 01:40:37.174043 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 01:40:37.174051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:37.174059 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:37.174067 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:40:37.174075 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:40:37.174083 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:40:37.174090 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:40:37.174099 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:40:37.174108 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:40:37.174116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 01:40:37.174124 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 20 01:40:37.174132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 20 01:40:37.174151 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:37.174160 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:37.174168 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:40:37.174176 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 01:40:37.174187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:40:37.174195 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 01:40:37.174202 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 01:40:37.174210 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:40:37.174218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:40:37.174244 systemd-journald[218]: Collecting audit messages is disabled. Jan 20 01:40:37.174265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:37.174273 systemd-journald[218]: Journal started Jan 20 01:40:37.174291 systemd-journald[218]: Runtime Journal (/run/log/journal/f852d12ffe904b619fa0ebaddb51ef5b) is 8.0M, max 78.5M, 70.5M free. Jan 20 01:40:37.175552 systemd-modules-load[219]: Inserted module 'overlay' Jan 20 01:40:37.199198 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:40:37.199221 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 01:40:37.203070 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 20 01:40:37.208370 kernel: Bridge firewalling registered Jan 20 01:40:37.206938 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 01:40:37.217338 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:37.222949 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 01:40:37.231206 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:37.238802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:37.257331 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:40:37.263486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:40:37.282905 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 01:40:37.297242 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:40:37.310158 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:37.321233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:37.325972 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 01:40:37.335511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:37.355322 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 01:40:37.365289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 20 01:40:37.378229 dracut-cmdline[252]: dracut-dracut-053 Jan 20 01:40:37.391496 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=93b7c0065a09ec71bf84c247be021b0de512ae4ddd93f3ff0c2b7b260332752d Jan 20 01:40:37.382351 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:40:37.399354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:37.438128 systemd-resolved[256]: Positive Trust Anchors: Jan 20 01:40:37.438164 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:40:37.438196 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:40:37.440958 systemd-resolved[256]: Defaulting to hostname 'linux'. Jan 20 01:40:37.447210 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:40:37.452370 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:37.540150 kernel: SCSI subsystem initialized Jan 20 01:40:37.546158 kernel: Loading iSCSI transport class v2.0-870. Jan 20 01:40:37.556162 kernel: iscsi: registered transport (tcp) Jan 20 01:40:37.572216 kernel: iscsi: registered transport (qla4xxx) Jan 20 01:40:37.572253 kernel: QLogic iSCSI HBA Driver Jan 20 01:40:37.610561 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 01:40:37.621392 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 01:40:37.653298 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 01:40:37.653351 kernel: device-mapper: uevent: version 1.0.3 Jan 20 01:40:37.658650 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 20 01:40:37.704159 kernel: raid6: neonx8 gen() 15817 MB/s Jan 20 01:40:37.724144 kernel: raid6: neonx4 gen() 15685 MB/s Jan 20 01:40:37.743142 kernel: raid6: neonx2 gen() 13278 MB/s Jan 20 01:40:37.762142 kernel: raid6: neonx1 gen() 10504 MB/s Jan 20 01:40:37.782143 kernel: raid6: int64x8 gen() 6978 MB/s Jan 20 01:40:37.801150 kernel: raid6: int64x4 gen() 7360 MB/s Jan 20 01:40:37.820152 kernel: raid6: int64x2 gen() 6147 MB/s Jan 20 01:40:37.842699 kernel: raid6: int64x1 gen() 5071 MB/s Jan 20 01:40:37.842753 kernel: raid6: using algorithm neonx8 gen() 15817 MB/s Jan 20 01:40:37.865085 kernel: raid6: .... 
xor() 12049 MB/s, rmw enabled Jan 20 01:40:37.865109 kernel: raid6: using neon recovery algorithm Jan 20 01:40:37.874871 kernel: xor: measuring software checksum speed Jan 20 01:40:37.874904 kernel: 8regs : 19793 MB/sec Jan 20 01:40:37.880673 kernel: 32regs : 19092 MB/sec Jan 20 01:40:37.880702 kernel: arm64_neon : 26866 MB/sec Jan 20 01:40:37.884067 kernel: xor: using function: arm64_neon (26866 MB/sec) Jan 20 01:40:37.934350 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 01:40:37.944545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:40:37.957287 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:37.976793 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jan 20 01:40:37.980010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:38.000282 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 01:40:38.014400 dracut-pre-trigger[444]: rd.md=0: removing MD RAID activation Jan 20 01:40:38.041767 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:40:38.054278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:40:38.089556 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:38.109712 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 01:40:38.128180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 01:40:38.138877 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:40:38.148228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:38.157418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:40:38.173320 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 01:40:38.189965 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:40:38.220709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:40:38.229382 kernel: hv_vmbus: Vmbus version:5.3 Jan 20 01:40:38.220845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:38.250047 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 01:40:38.268245 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 20 01:40:38.268268 kernel: hv_vmbus: registering driver hv_netvsc Jan 20 01:40:38.268278 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 01:40:38.258064 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:38.283604 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 01:40:38.258246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:38.297214 kernel: hv_vmbus: registering driver hv_storvsc Jan 20 01:40:38.297234 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 20 01:40:38.279589 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 20 01:40:38.316626 kernel: hv_vmbus: registering driver hid_hyperv Jan 20 01:40:38.316671 kernel: scsi host1: storvsc_host_t Jan 20 01:40:38.316992 kernel: PTP clock support registered Jan 20 01:40:38.313371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:38.359931 kernel: scsi host0: storvsc_host_t Jan 20 01:40:38.360084 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 20 01:40:38.360117 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 20 01:40:38.360129 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: VF slot 1 added Jan 20 01:40:38.360243 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 20 01:40:38.360330 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 20 01:40:38.349038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:38.379025 kernel: hv_utils: Registering HyperV Utility Driver Jan 20 01:40:38.379044 kernel: hv_vmbus: registering driver hv_pci Jan 20 01:40:38.349153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:38.390740 kernel: hv_pci 9b7f7be9-f695-47a8-a2f3-aaf9f19a1588: PCI VMBus probing: Using version 0x10004 Jan 20 01:40:38.391837 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:38.404857 kernel: hv_vmbus: registering driver hv_utils Jan 20 01:40:38.411153 kernel: hv_utils: Heartbeat IC version 3.0 Jan 20 01:40:38.411181 kernel: hv_utils: Shutdown IC version 3.2 Jan 20 01:40:38.416696 kernel: hv_pci 9b7f7be9-f695-47a8-a2f3-aaf9f19a1588: PCI host bridge to bus f695:00 Jan 20 01:40:38.416838 kernel: hv_utils: TimeSync IC version 4.0 Jan 20 01:40:38.478417 systemd-resolved[256]: Clock change detected. Flushing caches. Jan 20 01:40:38.486384 kernel: pci_bus f695:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 20 01:40:38.486535 kernel: pci_bus f695:00: No busn resource found for root bus, will use [bus 00-ff] Jan 20 01:40:38.490910 kernel: pci f695:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 20 01:40:38.510231 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 20 01:40:38.510407 kernel: pci f695:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:40:38.510433 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 01:40:38.510443 kernel: pci f695:00:02.0: enabling Extended Tags Jan 20 01:40:38.515196 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:38.544397 kernel: pci f695:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f695:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 20 01:40:38.544559 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 20 01:40:38.544665 kernel: pci_bus f695:00: busn_res: [bus 00-ff] end is updated to 00 Jan 20 01:40:38.544766 kernel: pci f695:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 20 01:40:38.548051 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 20 01:40:38.577910 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 20 01:40:38.578154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:38.578283 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 20 01:40:38.590920 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 20 01:40:38.591109 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 20 01:40:38.591207 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 20 01:40:38.605244 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:38.605291 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 20 01:40:38.609219 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:38.631835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#163 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:38.650352 kernel: mlx5_core f695:00:02.0: enabling device (0000 -> 0002) Jan 20 01:40:38.655908 kernel: mlx5_core f695:00:02.0: firmware version: 16.30.5026 Jan 20 01:40:38.849307 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: VF registering: eth1 Jan 20 01:40:38.849520 kernel: mlx5_core f695:00:02.0 eth1: joined to eth0 Jan 20 01:40:38.855946 kernel: mlx5_core f695:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 20 01:40:38.864917 kernel: mlx5_core f695:00:02.0 enP63125s1: renamed from eth1 Jan 20 01:40:39.103730 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 20 01:40:39.120993 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (500) Jan 20 01:40:39.136761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:40:39.171667 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 20 01:40:39.199915 kernel: BTRFS: device fsid ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (507) Jan 20 01:40:39.213457 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 20 01:40:39.219218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 20 01:40:39.247101 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 01:40:39.265914 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:39.271918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:40.284969 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 01:40:40.285631 disk-uuid[614]: The operation has completed successfully. Jan 20 01:40:40.349120 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:40:40.350917 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:40:40.386024 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:40:40.396342 sh[728]: Success Jan 20 01:40:40.426006 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 20 01:40:40.703526 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:40:40.709329 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 01:40:40.721009 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 20 01:40:40.748548 kernel: BTRFS info (device dm-0): first mount of filesystem ea3e8495-ec03-40ca-9b09-0f7e2a4e9620 Jan 20 01:40:40.748592 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:40.753840 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 01:40:40.757594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:40:40.760781 kernel: BTRFS info (device dm-0): using free space tree Jan 20 01:40:41.089976 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:40:41.094675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:40:41.118199 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:40:41.125055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:40:41.156572 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:41.156612 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:41.159970 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:41.199027 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:41.206735 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 01:40:41.215936 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:41.221592 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:40:41.240089 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:40:41.246922 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:40:41.262292 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:40:41.292866 systemd-networkd[912]: lo: Link UP Jan 20 01:40:41.292876 systemd-networkd[912]: lo: Gained carrier Jan 20 01:40:41.294535 systemd-networkd[912]: Enumeration completed Jan 20 01:40:41.294613 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:40:41.300624 systemd[1]: Reached target network.target - Network. Jan 20 01:40:41.303748 systemd-networkd[912]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:41.303751 systemd-networkd[912]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:40:41.376915 kernel: mlx5_core f695:00:02.0 enP63125s1: Link up Jan 20 01:40:41.414042 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: Data path switched to VF: enP63125s1 Jan 20 01:40:41.413718 systemd-networkd[912]: enP63125s1: Link UP Jan 20 01:40:41.413799 systemd-networkd[912]: eth0: Link UP Jan 20 01:40:41.413937 systemd-networkd[912]: eth0: Gained carrier Jan 20 01:40:41.413945 systemd-networkd[912]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 01:40:41.432090 systemd-networkd[912]: enP63125s1: Gained carrier Jan 20 01:40:41.441927 systemd-networkd[912]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:40:42.046399 ignition[907]: Ignition 2.19.0 Jan 20 01:40:42.046413 ignition[907]: Stage: fetch-offline Jan 20 01:40:42.046447 ignition[907]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:42.053525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:40:42.046456 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:42.046556 ignition[907]: parsed url from cmdline: "" Jan 20 01:40:42.046559 ignition[907]: no config URL provided Jan 20 01:40:42.046564 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:40:42.072155 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 01:40:42.046574 ignition[907]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:40:42.046580 ignition[907]: failed to fetch config: resource requires networking Jan 20 01:40:42.049831 ignition[907]: Ignition finished successfully Jan 20 01:40:42.090782 ignition[921]: Ignition 2.19.0 Jan 20 01:40:42.090787 ignition[921]: Stage: fetch Jan 20 01:40:42.091014 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:42.091023 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:42.091126 ignition[921]: parsed url from cmdline: "" Jan 20 01:40:42.091129 ignition[921]: no config URL provided Jan 20 01:40:42.091137 ignition[921]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:40:42.091150 ignition[921]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:40:42.091169 ignition[921]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 20 01:40:42.198379 ignition[921]: GET result: OK Jan 20 01:40:42.198469 ignition[921]: config has been read from IMDS userdata Jan 20 01:40:42.198512 ignition[921]: parsing config with SHA512: 6cc937a13ec233d5bb00a81c220f829bd61f1ae980ab451c65de0b68d45d89ab022e3ebb9961d664fe729e7ebc53e7d7f393887b5620d66bdf3b24822b509e61 Jan 20 01:40:42.202364 unknown[921]: fetched base config from "system" Jan 20 01:40:42.202708 ignition[921]: fetch: fetch complete Jan 20 01:40:42.202371 unknown[921]: fetched base config from "system" Jan 20 01:40:42.202712 ignition[921]: fetch: fetch passed Jan 20 01:40:42.202376 unknown[921]: fetched user config from "azure" Jan 20 01:40:42.202753 ignition[921]: Ignition finished successfully Jan 20 01:40:42.206305 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 01:40:42.226021 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:40:42.244206 ignition[928]: Ignition 2.19.0 Jan 20 01:40:42.244216 ignition[928]: Stage: kargs Jan 20 01:40:42.244417 ignition[928]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:42.250576 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:40:42.244427 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:42.245631 ignition[928]: kargs: kargs passed Jan 20 01:40:42.245672 ignition[928]: Ignition finished successfully Jan 20 01:40:42.274145 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 20 01:40:42.289468 ignition[935]: Ignition 2.19.0 Jan 20 01:40:42.289481 ignition[935]: Stage: disks Jan 20 01:40:42.289694 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:42.289703 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:42.290813 ignition[935]: disks: disks passed Jan 20 01:40:42.290858 ignition[935]: Ignition finished successfully Jan 20 01:40:42.293588 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:40:42.299269 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 01:40:42.307300 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:40:42.316399 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:40:42.324727 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:40:42.333311 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:40:42.356173 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:40:42.427505 systemd-fsck[943]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 20 01:40:42.435966 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:40:42.449054 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:40:42.500928 kernel: EXT4-fs (sda9): mounted filesystem c6ba54f7-cbb1-463d-980b-a8c197f00e73 r/w with ordered data mode. Quota mode: none. Jan 20 01:40:42.501686 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:40:42.505674 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:40:42.545952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:40:42.564914 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954) Jan 20 01:40:42.566959 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:40:42.582593 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:42.582610 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:42.586104 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:42.589406 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 01:40:42.597872 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:40:42.597911 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:40:42.604687 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:40:42.608628 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:42.612343 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:40:42.629087 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 20 01:40:42.684998 systemd-networkd[912]: eth0: Gained IPv6LL Jan 20 01:40:43.115098 coreos-metadata[969]: Jan 20 01:40:43.115 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 01:40:43.121372 coreos-metadata[969]: Jan 20 01:40:43.121 INFO Fetch successful Jan 20 01:40:43.121372 coreos-metadata[969]: Jan 20 01:40:43.121 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 20 01:40:43.134255 coreos-metadata[969]: Jan 20 01:40:43.134 INFO Fetch successful Jan 20 01:40:43.151644 coreos-metadata[969]: Jan 20 01:40:43.151 INFO wrote hostname ci-4081.3.6-n-0046389dc1 to /sysroot/etc/hostname Jan 20 01:40:43.159024 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:40:43.223068 initrd-setup-root[984]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:40:43.273851 initrd-setup-root[991]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:40:43.298909 initrd-setup-root[998]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:40:43.306528 initrd-setup-root[1005]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:40:44.680422 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:40:44.700085 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:40:44.708042 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:40:44.725429 kernel: BTRFS info (device sda6): last unmount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:44.720673 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:40:44.742158 ignition[1072]: INFO : Ignition 2.19.0 Jan 20 01:40:44.746075 ignition[1072]: INFO : Stage: mount Jan 20 01:40:44.746075 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:44.746075 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:44.746075 ignition[1072]: INFO : mount: mount passed Jan 20 01:40:44.746075 ignition[1072]: INFO : Ignition finished successfully Jan 20 01:40:44.746343 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:40:44.768052 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:40:44.777279 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 01:40:44.796117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:40:44.816913 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1084) Jan 20 01:40:44.826517 kernel: BTRFS info (device sda6): first mount of filesystem a80e435f-767b-4927-acd1-02c9e9018349 Jan 20 01:40:44.826539 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 01:40:44.829736 kernel: BTRFS info (device sda6): using free space tree Jan 20 01:40:44.835912 kernel: BTRFS info (device sda6): auto enabling async discard Jan 20 01:40:44.837542 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 01:40:44.860736 ignition[1102]: INFO : Ignition 2.19.0 Jan 20 01:40:44.864535 ignition[1102]: INFO : Stage: files Jan 20 01:40:44.864535 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:44.864535 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:44.864535 ignition[1102]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:40:44.880854 ignition[1102]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:40:44.880854 ignition[1102]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:40:44.954156 ignition[1102]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:40:44.959859 ignition[1102]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:40:44.959859 ignition[1102]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:40:44.954571 unknown[1102]: wrote ssh authorized keys file for user: core Jan 20 01:40:44.974610 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 20 01:40:44.974610 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 20 01:40:45.043743 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:40:45.189247 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 01:40:45.197769 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 20 01:40:45.761764 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:40:46.189950 ignition[1102]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 01:40:46.189950 ignition[1102]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:40:46.219430 ignition[1102]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:40:46.228187 ignition[1102]: INFO : files: files passed Jan 20 01:40:46.228187 ignition[1102]: INFO : Ignition finished successfully Jan 20 01:40:46.228543 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:40:46.257155 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:40:46.271193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:40:46.285234 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:40:46.285347 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:40:46.314164 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:40:46.318146 initrd-setup-root-after-ignition[1133]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:40:46.323602 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:40:46.339662 initrd-setup-root-after-ignition[1129]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:40:46.339662 initrd-setup-root-after-ignition[1129]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:40:46.354054 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:40:46.390729 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:40:46.392165 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 20 01:40:46.400511 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 01:40:46.409711 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:40:46.418254 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:40:46.421077 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:40:46.448123 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:40:46.461196 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:40:46.480014 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:46.485258 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:46.495015 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:40:46.503894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:40:46.504031 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:40:46.519319 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:40:46.523854 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:40:46.532503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:40:46.541100 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:40:46.549534 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:40:46.558764 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:40:46.567518 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:40:46.578015 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:40:46.586872 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:40:46.596098 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:40:46.603530 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:40:46.603652 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:40:46.614838 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:46.619536 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:46.628420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:40:46.632335 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:46.637945 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:40:46.638060 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:40:46.651429 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:40:46.651540 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:40:46.656891 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:40:46.656990 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:40:46.664853 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 20 01:40:46.664950 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 01:40:46.694118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:40:46.706870 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:40:46.707020 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:46.720550 ignition[1153]: INFO : Ignition 2.19.0 Jan 20 01:40:46.720550 ignition[1153]: INFO : Stage: umount Jan 20 01:40:46.720550 ignition[1153]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:40:46.720550 ignition[1153]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 20 01:40:46.732114 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:40:46.742543 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 01:40:46.742678 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:46.746365 ignition[1153]: INFO : umount: umount passed Jan 20 01:40:46.746365 ignition[1153]: INFO : Ignition finished successfully Jan 20 01:40:46.756957 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:40:46.757067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:40:46.771117 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:40:46.771749 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:40:46.771834 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:40:46.778681 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:40:46.778930 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:40:46.787962 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:40:46.788013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:40:46.796636 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:40:46.796680 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:40:46.805141 systemd[1]: Stopped target network.target - Network. Jan 20 01:40:46.813332 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:40:46.813376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:40:46.822561 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:40:46.830513 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:40:46.834044 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:46.839324 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:40:46.843036 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:40:46.850599 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:40:46.850641 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:40:46.855252 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:40:46.855290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:40:46.863212 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:40:46.863266 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:40:46.867355 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:40:46.867395 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 20 01:40:46.875569 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:40:46.888797 systemd-networkd[912]: eth0: DHCPv6 lease lost Jan 20 01:40:46.889998 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:40:46.898279 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 01:40:46.899984 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:40:46.908638 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:40:46.908752 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:40:46.918656 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:40:46.919934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:40:46.930254 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:40:46.930331 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:40:46.952119 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:40:46.961477 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:40:46.961551 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:40:46.970832 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:40:46.970885 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:46.978883 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:40:46.978933 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:46.986983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:40:46.987018 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:46.999989 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:40:47.030625 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:40:47.030856 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:47.040430 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:40:47.040481 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:47.052433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:40:47.052473 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:47.061886 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:40:47.061948 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:40:47.081389 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:40:47.081446 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:40:47.082006 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: Data path switched from VF: enP63125s1 Jan 20 01:40:47.091514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:40:47.091565 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:40:47.119109 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:40:47.131766 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 01:40:47.131840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:47.142961 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:47.143010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:47.151854 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 01:40:47.151972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:40:47.164363 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:40:47.165926 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:40:47.173310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:40:47.173392 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:40:47.183675 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:40:47.192579 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:40:47.192659 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:40:47.216108 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:40:47.231204 systemd[1]: Switching root. Jan 20 01:40:47.571404 systemd-journald[218]: Journal stopped Jan 20 01:40:52.347757 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jan 20 01:40:52.347780 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:40:52.347791 kernel: SELinux: policy capability open_perms=1 Jan 20 01:40:52.347801 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:40:52.347810 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:40:52.347818 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:40:52.347827 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:40:52.347836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:40:52.347844 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:40:52.347853 kernel: audit: type=1403 audit(1768873248.943:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:40:52.347864 systemd[1]: Successfully loaded SELinux policy in 199.769ms. Jan 20 01:40:52.347874 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.952ms. Jan 20 01:40:52.347885 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:40:52.347903 systemd[1]: Detected virtualization microsoft. Jan 20 01:40:52.347916 systemd[1]: Detected architecture arm64. Jan 20 01:40:52.347927 systemd[1]: Detected first boot. Jan 20 01:40:52.347937 systemd[1]: Hostname set to <ci-4081.3.6-n-0046389dc1>. Jan 20 01:40:52.347947 systemd[1]: Initializing machine ID from random generator. Jan 20 01:40:52.347956 zram_generator::config[1194]: No configuration found. Jan 20 01:40:52.347966 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:40:52.347976 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:40:52.347987 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:40:52.347997 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 01:40:52.348008 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:40:52.348018 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:40:52.348028 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:40:52.348038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:40:52.348048 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:40:52.348059 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:40:52.348070 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:40:52.348079 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 01:40:52.348089 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:40:52.348099 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:40:52.348109 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:40:52.348120 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:40:52.348130 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:40:52.348141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:40:52.348152 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 01:40:52.348162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:40:52.348172 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:40:52.348184 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:40:52.348194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:40:52.348204 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:40:52.348215 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:40:52.348226 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:40:52.348236 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:40:52.348246 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:40:52.348256 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:40:52.348266 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:40:52.348276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:40:52.348287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:40:52.348299 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:40:52.348309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:40:52.348320 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:40:52.348330 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:40:52.348341 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:40:52.348351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jan 20 01:40:52.348363 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:40:52.348373 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:40:52.348384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:40:52.348394 systemd[1]: Reached target machines.target - Containers. Jan 20 01:40:52.348405 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:40:52.348415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:52.348426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:40:52.348436 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:40:52.348448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:52.348459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:40:52.348469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:52.348479 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:40:52.348490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:52.348501 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:40:52.348511 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:40:52.348522 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:40:52.348532 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:40:52.348543 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:40:52.348554 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:40:52.348565 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:40:52.348575 kernel: fuse: init (API version 7.39) Jan 20 01:40:52.348584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:40:52.348595 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:40:52.348605 kernel: ACPI: bus type drm_connector registered Jan 20 01:40:52.348614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:40:52.348626 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:40:52.348636 systemd[1]: Stopped verity-setup.service. Jan 20 01:40:52.348646 kernel: loop: module loaded Jan 20 01:40:52.348655 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:40:52.348665 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:40:52.348688 systemd-journald[1287]: Collecting audit messages is disabled. Jan 20 01:40:52.348710 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:40:52.348721 systemd-journald[1287]: Journal started Jan 20 01:40:52.348741 systemd-journald[1287]: Runtime Journal (/run/log/journal/587459d9b0c6457ba2c1e99c4a8997c6) is 8.0M, max 78.5M, 70.5M free. Jan 20 01:40:51.439819 systemd[1]: Queued start job for default target multi-user.target. 
Jan 20 01:40:51.607252 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 01:40:51.607603 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:40:51.607918 systemd[1]: systemd-journald.service: Consumed 2.444s CPU time. Jan 20 01:40:52.362911 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:40:52.363335 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:40:52.367878 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:40:52.372526 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:40:52.376751 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:40:52.381735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:40:52.387238 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:40:52.387365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:40:52.392799 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:52.392929 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:52.397932 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:40:52.398046 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:40:52.402782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:52.402914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:52.408551 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:40:52.408670 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:40:52.413555 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:52.413665 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:52.418407 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:40:52.423364 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:40:52.428793 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:40:52.435919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:40:52.448875 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:40:52.458964 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:40:52.466024 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:40:52.472676 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:40:52.472709 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:40:52.477809 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 01:40:52.483799 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:40:52.489580 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:40:52.493833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:52.526050 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 20 01:40:52.531408 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:40:52.536290 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:40:52.537065 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:40:52.541612 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:40:52.544066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:40:52.549436 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 01:40:52.556888 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:40:52.565060 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 01:40:52.572694 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:40:52.578228 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:40:52.583735 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:40:52.589427 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:40:52.599673 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:40:52.600919 kernel: loop0: detected capacity change from 0 to 114328 Jan 20 01:40:52.610183 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 01:40:52.612321 systemd-journald[1287]: Time spent on flushing to /var/log/journal/587459d9b0c6457ba2c1e99c4a8997c6 is 15.983ms for 898 entries. Jan 20 01:40:52.612321 systemd-journald[1287]: System Journal (/var/log/journal/587459d9b0c6457ba2c1e99c4a8997c6) is 8.0M, max 2.6G, 2.6G free. Jan 20 01:40:52.621948 udevadm[1331]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 20 01:40:52.712483 systemd-journald[1287]: Received client request to flush runtime journal. Jan 20 01:40:52.714600 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:40:52.726832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:40:52.740090 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:40:52.741393 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 01:40:52.752034 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:40:52.764038 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:40:52.845231 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Jan 20 01:40:52.845246 systemd-tmpfiles[1345]: ACLs are not supported, ignoring. Jan 20 01:40:52.849454 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:40:53.071926 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:40:53.109963 kernel: loop1: detected capacity change from 0 to 114424 Jan 20 01:40:53.323885 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:40:53.335208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:40:53.352349 systemd-udevd[1351]: Using default interface naming scheme 'v255'. Jan 20 01:40:53.485967 kernel: loop2: detected capacity change from 0 to 207008 Jan 20 01:40:53.487350 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:40:53.507171 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:40:53.536245 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 20 01:40:53.557914 kernel: loop3: detected capacity change from 0 to 31320 Jan 20 01:40:53.567153 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:40:53.622562 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:40:53.653420 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:40:53.653500 kernel: hv_vmbus: registering driver hv_balloon Jan 20 01:40:53.660861 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 20 01:40:53.664107 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 20 01:40:53.690283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 20 01:40:53.707936 kernel: hv_vmbus: registering driver hyperv_fb Jan 20 01:40:53.717239 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 20 01:40:53.717300 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 20 01:40:53.722921 kernel: Console: switching to colour dummy device 80x25 Jan 20 01:40:53.724961 kernel: Console: switching to colour frame buffer device 128x48 Jan 20 01:40:53.733105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:53.754594 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:40:53.754765 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:53.764814 systemd-networkd[1367]: lo: Link UP Jan 20 01:40:53.764821 systemd-networkd[1367]: lo: Gained carrier Jan 20 01:40:53.767948 systemd-networkd[1367]: Enumeration completed Jan 20 01:40:53.774538 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:40:53.779691 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:53.780955 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:40:53.798954 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1370) Jan 20 01:40:53.802383 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:40:53.812890 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:40:53.839690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 20 01:40:53.840951 kernel: mlx5_core f695:00:02.0 enP63125s1: Link up Jan 20 01:40:53.850125 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 20 01:40:53.870997 kernel: hv_netvsc 7ced8d87-9f2e-7ced-8d87-9f2e7ced8d87 eth0: Data path switched to VF: enP63125s1 Jan 20 01:40:53.872949 systemd-networkd[1367]: enP63125s1: Link UP Jan 20 01:40:53.873050 systemd-networkd[1367]: eth0: Link UP Jan 20 01:40:53.873053 systemd-networkd[1367]: eth0: Gained carrier Jan 20 01:40:53.873066 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:40:53.877121 systemd-networkd[1367]: enP63125s1: Gained carrier Jan 20 01:40:53.885925 systemd-networkd[1367]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:40:53.890234 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:40:53.947923 kernel: loop4: detected capacity change from 0 to 114328 Jan 20 01:40:53.961947 kernel: loop5: detected capacity change from 0 to 114424 Jan 20 01:40:53.975932 kernel: loop6: detected capacity change from 0 to 207008 Jan 20 01:40:53.991917 kernel: loop7: detected capacity change from 0 to 31320 Jan 20 01:40:53.998431 (sd-merge)[1446]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 20 01:40:53.998831 (sd-merge)[1446]: Merged extensions into '/usr'. Jan 20 01:40:54.001959 systemd[1]: Reloading requested from client PID 1328 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:40:54.001973 systemd[1]: Reloading... Jan 20 01:40:54.058942 zram_generator::config[1474]: No configuration found. Jan 20 01:40:54.199948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:40:54.278400 systemd[1]: Reloading finished in 276 ms. Jan 20 01:40:54.312468 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 01:40:54.318223 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:40:54.323671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:40:54.341032 systemd[1]: Starting ensure-sysext.service... Jan 20 01:40:54.345056 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 01:40:54.354386 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:40:54.361320 systemd[1]: Reloading requested from client PID 1536 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:40:54.361333 systemd[1]: Reloading... Jan 20 01:40:54.374469 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:40:54.375782 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:40:54.376545 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:40:54.377288 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Jan 20 01:40:54.377417 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Jan 20 01:40:54.408802 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 20 01:40:54.408966 systemd-tmpfiles[1538]: Skipping /boot Jan 20 01:40:54.420860 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:40:54.421043 systemd-tmpfiles[1538]: Skipping /boot Jan 20 01:40:54.433463 lvm[1537]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:40:54.440986 zram_generator::config[1569]: No configuration found. Jan 20 01:40:54.547484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:40:54.621416 systemd[1]: Reloading finished in 259 ms. Jan 20 01:40:54.643232 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 01:40:54.650119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:40:54.661596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:40:54.672150 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:40:54.699198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:40:54.706144 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 01:40:54.715014 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:40:54.719719 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:40:54.725379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:40:54.734894 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:40:54.744891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:54.747529 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:54.759657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:54.766196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:54.773241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:54.774156 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 01:40:54.779971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:54.780107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:54.786188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:54.786321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:54.792188 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:54.792308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:54.803879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:54.809131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:54.816164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:54.823166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 01:40:54.830235 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:54.831247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:54.832186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:54.842661 systemd-resolved[1638]: Positive Trust Anchors: Jan 20 01:40:54.843265 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:40:54.843589 systemd-resolved[1638]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:40:54.843692 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:40:54.849826 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 01:40:54.855427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:54.855556 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:54.861256 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:40:54.861387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:54.871797 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:40:54.877602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:40:54.884206 augenrules[1660]: No rules Jan 20 01:40:54.884998 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:40:54.891746 systemd-resolved[1638]: Using system hostname 'ci-4081.3.6-n-0046389dc1'. Jan 20 01:40:54.895507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:40:54.902537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:40:54.906786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:40:54.906946 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:40:54.913044 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:40:54.918103 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:40:54.923143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:40:54.923268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:40:54.928344 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:40:54.928471 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:40:54.933279 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:40:54.933393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:40:54.939353 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 20 01:40:54.939476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:40:54.948969 systemd[1]: Finished ensure-sysext.service. Jan 20 01:40:54.954414 systemd[1]: Reached target network.target - Network. Jan 20 01:40:54.958308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:40:54.963511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:40:54.963572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:40:55.037011 systemd-networkd[1367]: eth0: Gained IPv6LL Jan 20 01:40:55.039274 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:40:55.045296 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:40:55.257465 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:40:55.262982 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:40:58.491915 ldconfig[1323]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:40:58.502015 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:40:58.511083 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:40:58.523249 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:40:58.528131 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:40:58.532635 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:40:58.537552 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:40:58.542824 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:40:58.547279 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:40:58.552726 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:40:58.558111 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:40:58.558138 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:40:58.562030 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:40:58.566428 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 01:40:58.572242 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:40:58.601519 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:40:58.606442 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:40:58.611321 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:40:58.615223 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:40:58.619053 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:40:58.619075 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 20 01:40:58.628973 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 01:40:58.635002 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:40:58.651446 (chronyd)[1686]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 20 01:40:58.658137 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 01:40:58.665080 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:40:58.670591 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:40:58.673694 chronyd[1694]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 20 01:40:58.678148 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:40:58.682337 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:40:58.682373 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 20 01:40:58.682671 jq[1692]: false Jan 20 01:40:58.684184 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 20 01:40:58.692208 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 20 01:40:58.693516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:40:58.695423 KVP[1696]: KVP starting; pid is:1696 Jan 20 01:40:58.700991 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring Jan 20 01:40:58.701040 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:40:58.707392 chronyd[1694]: Loaded seccomp filter (level 2) Jan 20 01:40:58.714101 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:40:58.720071 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:40:58.728087 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 20 01:40:58.732923 extend-filesystems[1695]: Found loop4 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found loop5 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found loop6 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found loop7 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda1 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda2 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda3 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found usr Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda4 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda6 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda7 Jan 20 01:40:58.732923 extend-filesystems[1695]: Found sda9 Jan 20 01:40:58.732923 extend-filesystems[1695]: Checking size of /dev/sda9 Jan 20 01:40:58.889968 kernel: hv_utils: KVP IC version 4.0 Jan 20 01:40:58.890001 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1735) Jan 20 01:40:58.890042 coreos-metadata[1688]: Jan 20 01:40:58.879 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 20 01:40:58.890042 coreos-metadata[1688]: Jan 20 01:40:58.886 INFO Fetch successful Jan 20 01:40:58.890042 coreos-metadata[1688]: Jan 20 01:40:58.886 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 20 01:40:58.736088 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:40:58.747919 KVP[1696]: KVP LIC Version: 3.1 Jan 20 01:40:58.899515 extend-filesystems[1695]: Old size kept for /dev/sda9 Jan 20 01:40:58.899515 extend-filesystems[1695]: Found sr0 Jan 20 01:40:58.925109 coreos-metadata[1688]: Jan 20 01:40:58.890 INFO Fetch successful Jan 20 01:40:58.925109 coreos-metadata[1688]: Jan 20 01:40:58.890 INFO Fetching http://168.63.129.16/machine/fd017783-9248-4086-b4de-da9e31e808df/e32d84d7%2De021%2D4a44%2D9ce6%2D192aeeb49ec3.%5Fci%2D4081.3.6%2Dn%2D0046389dc1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 20 01:40:58.925109 coreos-metadata[1688]: Jan 20 01:40:58.892 INFO Fetch successful Jan 20 01:40:58.925109 coreos-metadata[1688]: Jan 20 01:40:58.892 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 20 01:40:58.925109 coreos-metadata[1688]: Jan 20 01:40:58.903 INFO Fetch successful Jan 20 01:40:58.761081 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:40:58.778811 dbus-daemon[1689]: [system] SELinux support is enabled Jan 20 01:40:58.769894 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:40:58.770370 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:40:58.778077 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:40:58.925918 update_engine[1716]: I20260120 01:40:58.876775 1716 main.cc:92] Flatcar Update Engine starting Jan 20 01:40:58.925918 update_engine[1716]: I20260120 01:40:58.887701 1716 update_check_scheduler.cc:74] Next update check in 11m36s Jan 20 01:40:58.802224 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:40:58.926211 jq[1721]: true Jan 20 01:40:58.817137 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 20 01:40:58.828344 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 01:40:58.845824 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:40:58.846001 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 01:40:58.846238 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:40:58.846365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 01:40:58.894349 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:40:58.894917 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:40:58.902434 systemd-logind[1712]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:40:58.903055 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:40:58.915076 systemd-logind[1712]: New seat seat0. Jan 20 01:40:58.921308 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:40:58.966916 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:40:58.968943 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:40:59.008804 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 01:40:59.013800 (ntainerd)[1767]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:40:59.024201 jq[1766]: true Jan 20 01:40:59.050219 dbus-daemon[1689]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 01:40:59.054376 tar[1758]: linux-arm64/LICENSE Jan 20 01:40:59.056661 tar[1758]: linux-arm64/helm Jan 20 01:40:59.063498 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:40:59.079930 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 01:40:59.080133 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:40:59.080258 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:40:59.091300 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:40:59.091414 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:40:59.104226 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:40:59.124548 sshd_keygen[1723]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:40:59.148414 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:40:59.159227 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:40:59.166638 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 20 01:40:59.192623 bash[1808]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:40:59.193350 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:40:59.193991 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:40:59.208944 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 20 01:40:59.220434 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 20 01:40:59.228540 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:40:59.241073 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 20 01:40:59.253531 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:40:59.261627 locksmithd[1807]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:40:59.271015 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:40:59.279322 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 20 01:40:59.287925 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:40:59.609014 tar[1758]: linux-arm64/README.md Jan 20 01:40:59.622820 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:40:59.703447 containerd[1767]: time="2026-01-20T01:40:59.702865340Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 01:40:59.729789 containerd[1767]: time="2026-01-20T01:40:59.729743980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.731244 containerd[1767]: time="2026-01-20T01:40:59.731200100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:59.731421 containerd[1767]: time="2026-01-20T01:40:59.731400740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 01:40:59.731490 containerd[1767]: time="2026-01-20T01:40:59.731478620Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 01:40:59.733172 containerd[1767]: time="2026-01-20T01:40:59.732298860Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 01:40:59.733172 containerd[1767]: time="2026-01-20T01:40:59.732325540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.733172 containerd[1767]: time="2026-01-20T01:40:59.732394260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:59.733172 containerd[1767]: time="2026-01-20T01:40:59.732406100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.733408 containerd[1767]: time="2026-01-20T01:40:59.733386620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:59.733928 containerd[1767]: time="2026-01-20T01:40:59.733912740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.733990 containerd[1767]: time="2026-01-20T01:40:59.733977180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:59.734052 containerd[1767]: time="2026-01-20T01:40:59.734039300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.734198 containerd[1767]: time="2026-01-20T01:40:59.734181340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.734972 containerd[1767]: time="2026-01-20T01:40:59.734950780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:40:59.735474 containerd[1767]: time="2026-01-20T01:40:59.735448540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:40:59.736753 containerd[1767]: time="2026-01-20T01:40:59.736721820Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 01:40:59.736879 containerd[1767]: time="2026-01-20T01:40:59.736842620Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 01:40:59.736944 containerd[1767]: time="2026-01-20T01:40:59.736891100Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:40:59.748927 containerd[1767]: time="2026-01-20T01:40:59.748677500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 01:40:59.748927 containerd[1767]: time="2026-01-20T01:40:59.748726340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 01:40:59.748927 containerd[1767]: time="2026-01-20T01:40:59.748743940Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 01:40:59.748927 containerd[1767]: time="2026-01-20T01:40:59.748758260Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 01:40:59.748927 containerd[1767]: time="2026-01-20T01:40:59.748771380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 01:40:59.749118 containerd[1767]: time="2026-01-20T01:40:59.748932580Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749145860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749272900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749290380Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749303740Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749316580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749329180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749342340Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749356820Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749371740Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749384220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749396260Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749408420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749428500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.749513 containerd[1767]: time="2026-01-20T01:40:59.749442060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749454020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749466020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749477620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749490260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749501980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749514860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749528260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749553700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749565780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749577620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749610860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749628060Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749648300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749659940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750337 containerd[1767]: time="2026-01-20T01:40:59.749670420Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749728540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749758340Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749769420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749780940Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749790300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749806940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749816820Z" level=info msg="NRI interface is disabled by configuration." Jan 20 01:40:59.750637 containerd[1767]: time="2026-01-20T01:40:59.749830460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 01:40:59.750793 containerd[1767]: time="2026-01-20T01:40:59.750267580Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 01:40:59.750793 containerd[1767]: time="2026-01-20T01:40:59.750362060Z" level=info msg="Connect containerd service" Jan 20 01:40:59.750793 containerd[1767]: time="2026-01-20T01:40:59.750396500Z" level=info msg="using legacy CRI server" Jan 20 01:40:59.750793 containerd[1767]: time="2026-01-20T01:40:59.750409940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:40:59.750793 containerd[1767]: time="2026-01-20T01:40:59.750606700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 01:40:59.752071 containerd[1767]: time="2026-01-20T01:40:59.751911700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:40:59.757392 
containerd[1767]: time="2026-01-20T01:40:59.752291500Z" level=info msg="Start subscribing containerd event" Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752558580Z" level=info msg="Start recovering state" Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752623940Z" level=info msg="Start event monitor" Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752634180Z" level=info msg="Start snapshots syncer" Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752642780Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752649620Z" level=info msg="Start streaming server" Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752453380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752788780Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:40:59.757392 containerd[1767]: time="2026-01-20T01:40:59.752835220Z" level=info msg="containerd successfully booted in 0.050679s" Jan 20 01:40:59.752933 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:40:59.828516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:40:59.834028 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:40:59.834200 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:40:59.839483 systemd[1]: Startup finished in 589ms (kernel) + 11.965s (initrd) + 11.094s (userspace) = 23.649s. Jan 20 01:41:00.157195 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:00.157907 login[1837]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:00.166398 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:41:00.175100 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:41:00.177553 systemd-logind[1712]: New session 1 of user core. Jan 20 01:41:00.181294 systemd-logind[1712]: New session 2 of user core. Jan 20 01:41:00.199577 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:41:00.207280 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 01:41:00.209851 (systemd)[1865]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:41:00.272888 kubelet[1854]: E0120 01:41:00.272836 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:00.275534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:00.275667 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:00.326454 systemd[1865]: Queued start job for default target default.target. Jan 20 01:41:00.335720 systemd[1865]: Created slice app.slice - User Application Slice. Jan 20 01:41:00.335746 systemd[1865]: Reached target paths.target - Paths. Jan 20 01:41:00.335758 systemd[1865]: Reached target timers.target - Timers. 
Jan 20 01:41:00.336852 systemd[1865]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:41:00.346484 systemd[1865]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:41:00.346621 systemd[1865]: Reached target sockets.target - Sockets. Jan 20 01:41:00.346688 systemd[1865]: Reached target basic.target - Basic System. Jan 20 01:41:00.346775 systemd[1865]: Reached target default.target - Main User Target. Jan 20 01:41:00.346866 systemd[1865]: Startup finished in 131ms. Jan 20 01:41:00.346981 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:41:00.353043 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:41:00.354523 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:41:01.431411 waagent[1829]: 2026-01-20T01:41:01.431329Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 20 01:41:01.435761 waagent[1829]: 2026-01-20T01:41:01.435711Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 20 01:41:01.439418 waagent[1829]: 2026-01-20T01:41:01.439377Z INFO Daemon Daemon Python: 3.11.9 Jan 20 01:41:01.442719 waagent[1829]: 2026-01-20T01:41:01.442673Z INFO Daemon Daemon Run daemon Jan 20 01:41:01.445823 waagent[1829]: 2026-01-20T01:41:01.445752Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 20 01:41:01.452415 waagent[1829]: 2026-01-20T01:41:01.452375Z INFO Daemon Daemon Using waagent for provisioning Jan 20 01:41:01.456502 waagent[1829]: 2026-01-20T01:41:01.456466Z INFO Daemon Daemon Activate resource disk Jan 20 01:41:01.460055 waagent[1829]: 2026-01-20T01:41:01.460020Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 20 01:41:01.469018 waagent[1829]: 2026-01-20T01:41:01.468974Z INFO Daemon Daemon Found device: None Jan 20 01:41:01.472382 waagent[1829]: 2026-01-20T01:41:01.472341Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 20 01:41:01.478637 waagent[1829]: 2026-01-20T01:41:01.478602Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 20 01:41:01.488550 waagent[1829]: 2026-01-20T01:41:01.488493Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 01:41:01.492904 waagent[1829]: 2026-01-20T01:41:01.492863Z INFO Daemon Daemon Running default provisioning handler Jan 20 01:41:01.503374 waagent[1829]: 2026-01-20T01:41:01.503324Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 20 01:41:01.513622 waagent[1829]: 2026-01-20T01:41:01.513574Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 20 01:41:01.520866 waagent[1829]: 2026-01-20T01:41:01.520830Z INFO Daemon Daemon cloud-init is enabled: False Jan 20 01:41:01.524641 waagent[1829]: 2026-01-20T01:41:01.524608Z INFO Daemon Daemon Copying ovf-env.xml Jan 20 01:41:01.650928 waagent[1829]: 2026-01-20T01:41:01.646539Z INFO Daemon Daemon Successfully mounted dvd Jan 20 01:41:01.674285 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 20 01:41:01.676922 waagent[1829]: 2026-01-20T01:41:01.676175Z INFO Daemon Daemon Detect protocol endpoint Jan 20 01:41:01.680011 waagent[1829]: 2026-01-20T01:41:01.679968Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 20 01:41:01.684578 waagent[1829]: 2026-01-20T01:41:01.684507Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 20 01:41:01.689347 waagent[1829]: 2026-01-20T01:41:01.689311Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 20 01:41:01.693777 waagent[1829]: 2026-01-20T01:41:01.693740Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 20 01:41:01.697572 waagent[1829]: 2026-01-20T01:41:01.697538Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 20 01:41:01.743511 waagent[1829]: 2026-01-20T01:41:01.743470Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 20 01:41:01.748480 waagent[1829]: 2026-01-20T01:41:01.748456Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 20 01:41:01.752331 waagent[1829]: 2026-01-20T01:41:01.752296Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 20 01:41:02.190930 waagent[1829]: 2026-01-20T01:41:02.190280Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 20 01:41:02.195514 waagent[1829]: 2026-01-20T01:41:02.195464Z INFO Daemon Daemon Forcing an update of the goal state. Jan 20 01:41:02.203230 waagent[1829]: 2026-01-20T01:41:02.203182Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 01:41:02.219538 waagent[1829]: 2026-01-20T01:41:02.219492Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 20 01:41:02.224169 waagent[1829]: 2026-01-20T01:41:02.224124Z INFO Daemon Jan 20 01:41:02.226353 waagent[1829]: 2026-01-20T01:41:02.226308Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f6b5837f-5df9-4a8c-bc82-ee0c557fb185 eTag: 13402238424620113387 source: Fabric] Jan 20 01:41:02.235068 waagent[1829]: 2026-01-20T01:41:02.235024Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 20 01:41:02.240221 waagent[1829]: 2026-01-20T01:41:02.240176Z INFO Daemon Jan 20 01:41:02.242324 waagent[1829]: 2026-01-20T01:41:02.242280Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 20 01:41:02.250394 waagent[1829]: 2026-01-20T01:41:02.250358Z INFO Daemon Daemon Downloading artifacts profile blob Jan 20 01:41:02.320379 waagent[1829]: 2026-01-20T01:41:02.320310Z INFO Daemon Downloaded certificate {'thumbprint': 'EF87708046FBD99058DE94840C7731114875FC25', 'hasPrivateKey': True} Jan 20 01:41:02.328385 waagent[1829]: 2026-01-20T01:41:02.328335Z INFO Daemon Fetch goal state completed Jan 20 01:41:02.337450 waagent[1829]: 2026-01-20T01:41:02.337381Z INFO Daemon Daemon Starting provisioning Jan 20 01:41:02.341325 waagent[1829]: 2026-01-20T01:41:02.341279Z INFO Daemon Daemon Handle ovf-env.xml. Jan 20 01:41:02.345050 waagent[1829]: 2026-01-20T01:41:02.345007Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-0046389dc1] Jan 20 01:41:02.351296 waagent[1829]: 2026-01-20T01:41:02.351246Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-0046389dc1] Jan 20 01:41:02.356375 waagent[1829]: 2026-01-20T01:41:02.356329Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 20 01:41:02.361332 waagent[1829]: 2026-01-20T01:41:02.361289Z INFO Daemon Daemon Primary interface is [eth0] Jan 20 01:41:02.405999 systemd-networkd[1367]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 01:41:02.406006 systemd-networkd[1367]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:41:02.406035 systemd-networkd[1367]: eth0: DHCP lease lost Jan 20 01:41:02.409933 waagent[1829]: 2026-01-20T01:41:02.407268Z INFO Daemon Daemon Create user account if not exists Jan 20 01:41:02.412373 waagent[1829]: 2026-01-20T01:41:02.412316Z INFO Daemon Daemon User core already exists, skip useradd Jan 20 01:41:02.412442 systemd-networkd[1367]: eth0: DHCPv6 lease lost Jan 20 01:41:02.416868 waagent[1829]: 2026-01-20T01:41:02.416812Z INFO Daemon Daemon Configure sudoer Jan 20 01:41:02.421052 waagent[1829]: 2026-01-20T01:41:02.420998Z INFO Daemon Daemon Configure sshd Jan 20 01:41:02.424712 waagent[1829]: 2026-01-20T01:41:02.424660Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 20 01:41:02.435005 waagent[1829]: 2026-01-20T01:41:02.434962Z INFO Daemon Daemon Deploy ssh public key. Jan 20 01:41:02.447967 systemd-networkd[1367]: eth0: DHCPv4 address 10.200.20.33/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 20 01:41:03.539032 waagent[1829]: 2026-01-20T01:41:03.538967Z INFO Daemon Daemon Provisioning complete Jan 20 01:41:03.553949 waagent[1829]: 2026-01-20T01:41:03.553882Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 20 01:41:03.558746 waagent[1829]: 2026-01-20T01:41:03.558704Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 20 01:41:03.566011 waagent[1829]: 2026-01-20T01:41:03.565972Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 20 01:41:03.691605 waagent[1916]: 2026-01-20T01:41:03.691018Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 20 01:41:03.691605 waagent[1916]: 2026-01-20T01:41:03.691151Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 20 01:41:03.691605 waagent[1916]: 2026-01-20T01:41:03.691203Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 20 01:41:03.728924 waagent[1916]: 2026-01-20T01:41:03.728038Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 20 01:41:03.728924 waagent[1916]: 2026-01-20T01:41:03.728250Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:41:03.728924 waagent[1916]: 2026-01-20T01:41:03.728310Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:41:03.735547 waagent[1916]: 2026-01-20T01:41:03.735491Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 20 01:41:03.740482 waagent[1916]: 2026-01-20T01:41:03.740446Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 20 01:41:03.740901 waagent[1916]: 2026-01-20T01:41:03.740856Z INFO ExtHandler Jan 20 01:41:03.740984 waagent[1916]: 2026-01-20T01:41:03.740955Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 32d0faaa-bbff-43ee-b9fb-382bad416716 eTag: 13402238424620113387 source: Fabric] Jan 20 01:41:03.741269 waagent[1916]: 2026-01-20T01:41:03.741234Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 20 01:41:03.741826 waagent[1916]: 2026-01-20T01:41:03.741784Z INFO ExtHandler Jan 20 01:41:03.741885 waagent[1916]: 2026-01-20T01:41:03.741860Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 20 01:41:03.744690 waagent[1916]: 2026-01-20T01:41:03.744661Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:41:03.807192 waagent[1916]: 2026-01-20T01:41:03.807086Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EF87708046FBD99058DE94840C7731114875FC25', 'hasPrivateKey': True} Jan 20 01:41:03.808640 waagent[1916]: 2026-01-20T01:41:03.807777Z INFO ExtHandler Fetch goal state completed Jan 20 01:41:03.822139 waagent[1916]: 2026-01-20T01:41:03.822083Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1916 Jan 20 01:41:03.822834 waagent[1916]: 2026-01-20T01:41:03.822356Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 20 01:41:03.823961 waagent[1916]: 2026-01-20T01:41:03.823918Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 20 01:41:03.824314 waagent[1916]: 2026-01-20T01:41:03.824277Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 20 01:41:04.327087 waagent[1916]: 2026-01-20T01:41:04.327044Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 20 01:41:04.327281 waagent[1916]: 2026-01-20T01:41:04.327244Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 20 01:41:04.333451 waagent[1916]: 2026-01-20T01:41:04.333412Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 20 01:41:04.339371 systemd[1]: Reloading requested from client PID 1929 ('systemctl') (unit waagent.service)... Jan 20 01:41:04.339583 systemd[1]: Reloading... Jan 20 01:41:04.411930 zram_generator::config[1963]: No configuration found. Jan 20 01:41:04.510208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:41:04.584504 systemd[1]: Reloading finished in 244 ms. Jan 20 01:41:04.608923 waagent[1916]: 2026-01-20T01:41:04.606260Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 20 01:41:04.613195 systemd[1]: Reloading requested from client PID 2017 ('systemctl') (unit waagent.service)... Jan 20 01:41:04.613208 systemd[1]: Reloading... Jan 20 01:41:04.678929 zram_generator::config[2051]: No configuration found. Jan 20 01:41:04.786078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:41:04.860052 systemd[1]: Reloading finished in 246 ms. Jan 20 01:41:04.878932 waagent[1916]: 2026-01-20T01:41:04.878298Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 20 01:41:04.878932 waagent[1916]: 2026-01-20T01:41:04.878454Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 20 01:41:05.324276 waagent[1916]: 2026-01-20T01:41:05.323048Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 20 01:41:05.324276 waagent[1916]: 2026-01-20T01:41:05.323646Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 20 01:41:05.324519 waagent[1916]: 2026-01-20T01:41:05.324455Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 20 01:41:05.324676 waagent[1916]: 2026-01-20T01:41:05.324625Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:41:05.325092 waagent[1916]: 2026-01-20T01:41:05.325036Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 20 01:41:05.325309 waagent[1916]: 2026-01-20T01:41:05.325276Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 20 01:41:05.325309 waagent[1916]: 2026-01-20T01:41:05.325144Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:41:05.325705 waagent[1916]: 2026-01-20T01:41:05.325663Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 20 01:41:05.326145 waagent[1916]: 2026-01-20T01:41:05.326097Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 20 01:41:05.326145 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 20 01:41:05.326145 waagent[1916]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 20 01:41:05.326145 waagent[1916]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 20 01:41:05.326145 waagent[1916]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:41:05.326145 waagent[1916]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:41:05.326145 waagent[1916]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 20 01:41:05.326626 waagent[1916]: 2026-01-20T01:41:05.326404Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 20 01:41:05.326626 waagent[1916]: 2026-01-20T01:41:05.326551Z INFO EnvHandler ExtHandler Configure routes Jan 20 01:41:05.326626 waagent[1916]: 2026-01-20T01:41:05.326613Z INFO EnvHandler ExtHandler Gateway:None Jan 20 01:41:05.326703 waagent[1916]: 2026-01-20T01:41:05.326655Z INFO EnvHandler ExtHandler Routes:None Jan 20 01:41:05.327050 waagent[1916]: 2026-01-20T01:41:05.326995Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 20 01:41:05.327168 waagent[1916]: 2026-01-20T01:41:05.327132Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 20 01:41:05.327546 waagent[1916]: 2026-01-20T01:41:05.327498Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 20 01:41:05.327689 waagent[1916]: 2026-01-20T01:41:05.327637Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jan 20 01:41:05.327781 waagent[1916]: 2026-01-20T01:41:05.327743Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 20 01:41:05.334594 waagent[1916]: 2026-01-20T01:41:05.334547Z INFO ExtHandler ExtHandler Jan 20 01:41:05.335925 waagent[1916]: 2026-01-20T01:41:05.334741Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8daf26db-c85b-433d-9fa5-211608bb5f40 correlation a81b7840-a8be-4d16-9234-e941613b84b8 created: 2026-01-20T01:40:07.847229Z] Jan 20 01:41:05.335925 waagent[1916]: 2026-01-20T01:41:05.335109Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 20 01:41:05.335925 waagent[1916]: 2026-01-20T01:41:05.335631Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 20 01:41:05.361571 waagent[1916]: 2026-01-20T01:41:05.361517Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9777AAC8-4DE3-4235-90B1-A3DB28447915;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 20 01:41:05.363667 waagent[1916]: 2026-01-20T01:41:05.363604Z INFO MonitorHandler ExtHandler Network interfaces: Jan 20 01:41:05.363667 waagent[1916]: Executing ['ip', '-a', '-o', 'link']: Jan 20 01:41:05.363667 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 20 01:41:05.363667 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:9f:2e brd ff:ff:ff:ff:ff:ff Jan 20 01:41:05.363667 waagent[1916]: 3: enP63125s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:9f:2e brd ff:ff:ff:ff:ff:ff\ altname enP63125p0s2 Jan 20 01:41:05.363667 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 20 01:41:05.363667 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 20 01:41:05.363667 waagent[1916]: 2: eth0 inet 10.200.20.33/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 20 01:41:05.363667 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 20 01:41:05.363667 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 20 01:41:05.363667 waagent[1916]: 2: eth0 inet6 fe80::7eed:8dff:fe87:9f2e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 20 01:41:05.424971 waagent[1916]: 2026-01-20T01:41:05.424435Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 20 01:41:05.424971 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:41:05.424971 waagent[1916]: pkts bytes target prot opt in out source destination Jan 20 01:41:05.424971 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:41:05.424971 waagent[1916]: pkts bytes target prot opt in out source destination Jan 20 01:41:05.424971 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:41:05.424971 waagent[1916]: pkts bytes target prot opt in out source destination Jan 20 01:41:05.424971 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 01:41:05.424971 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 01:41:05.424971 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 01:41:05.427281 waagent[1916]: 2026-01-20T01:41:05.427224Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 20 01:41:05.427281 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:41:05.427281 waagent[1916]: pkts bytes target prot opt in out source destination Jan 20 01:41:05.427281 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:41:05.427281 waagent[1916]: pkts bytes target prot opt in out source destination Jan 20 01:41:05.427281 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 20 01:41:05.427281 waagent[1916]: pkts bytes target prot opt in out source destination Jan 20 01:41:05.427281 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 20 01:41:05.427281 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 20 01:41:05.427281 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 20 01:41:05.427511 waagent[1916]: 2026-01-20T01:41:05.427475Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 20 01:41:10.478163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:41:10.488139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:10.585336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:10.589124 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:10.681448 kubelet[2144]: E0120 01:41:10.681395 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:10.684852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:10.685119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:20.728241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:41:20.739057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:20.835686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 01:41:20.839128 (kubelet)[2159]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:20.936490 kubelet[2159]: E0120 01:41:20.936438 2159 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:20.939231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:20.939461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:22.495285 chronyd[1694]: Selected source PHC0 Jan 20 01:41:24.341476 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 01:41:24.342517 systemd[1]: Started sshd@0-10.200.20.33:22-10.200.16.10:39472.service - OpenSSH per-connection server daemon (10.200.16.10:39472). Jan 20 01:41:24.915253 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 39472 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:24.916537 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:24.921205 systemd-logind[1712]: New session 3 of user core. Jan 20 01:41:24.927048 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:41:25.344046 systemd[1]: Started sshd@1-10.200.20.33:22-10.200.16.10:39482.service - OpenSSH per-connection server daemon (10.200.16.10:39482). Jan 20 01:41:25.834235 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 39482 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:25.835530 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:25.840079 systemd-logind[1712]: New session 4 of user core. Jan 20 01:41:25.846029 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:41:26.184586 sshd[2172]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:26.188208 systemd[1]: sshd@1-10.200.20.33:22-10.200.16.10:39482.service: Deactivated successfully. Jan 20 01:41:26.189716 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 01:41:26.191352 systemd-logind[1712]: Session 4 logged out. Waiting for processes to exit. Jan 20 01:41:26.192365 systemd-logind[1712]: Removed session 4. Jan 20 01:41:26.268823 systemd[1]: Started sshd@2-10.200.20.33:22-10.200.16.10:39484.service - OpenSSH per-connection server daemon (10.200.16.10:39484). Jan 20 01:41:26.720678 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 39484 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:26.721999 sshd[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:26.725558 systemd-logind[1712]: New session 5 of user core. Jan 20 01:41:26.736010 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:41:27.052032 sshd[2179]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:27.055383 systemd[1]: sshd@2-10.200.20.33:22-10.200.16.10:39484.service: Deactivated successfully. Jan 20 01:41:27.057382 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 01:41:27.058118 systemd-logind[1712]: Session 5 logged out. Waiting for processes to exit. Jan 20 01:41:27.059018 systemd-logind[1712]: Removed session 5. 
Jan 20 01:41:27.120209 systemd[1]: Started sshd@3-10.200.20.33:22-10.200.16.10:39496.service - OpenSSH per-connection server daemon (10.200.16.10:39496). Jan 20 01:41:27.532440 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 39496 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:27.533718 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:27.537620 systemd-logind[1712]: New session 6 of user core. Jan 20 01:41:27.548070 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:41:27.841061 sshd[2186]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:27.844506 systemd[1]: sshd@3-10.200.20.33:22-10.200.16.10:39496.service: Deactivated successfully. Jan 20 01:41:27.845860 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:41:27.847120 systemd-logind[1712]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:41:27.848107 systemd-logind[1712]: Removed session 6. Jan 20 01:41:27.938223 systemd[1]: Started sshd@4-10.200.20.33:22-10.200.16.10:39512.service - OpenSSH per-connection server daemon (10.200.16.10:39512). Jan 20 01:41:28.389985 sshd[2193]: Accepted publickey for core from 10.200.16.10 port 39512 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:28.391226 sshd[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:28.395780 systemd-logind[1712]: New session 7 of user core. Jan 20 01:41:28.402093 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:41:28.784434 sudo[2196]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:41:28.784718 sudo[2196]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:41:28.808029 sudo[2196]: pam_unix(sudo:session): session closed for user root Jan 20 01:41:28.886081 sshd[2193]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:28.889778 systemd[1]: sshd@4-10.200.20.33:22-10.200.16.10:39512.service: Deactivated successfully. Jan 20 01:41:28.891268 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:41:28.891805 systemd-logind[1712]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:41:28.892633 systemd-logind[1712]: Removed session 7. Jan 20 01:41:28.965285 systemd[1]: Started sshd@5-10.200.20.33:22-10.200.16.10:39518.service - OpenSSH per-connection server daemon (10.200.16.10:39518). Jan 20 01:41:29.419332 sshd[2201]: Accepted publickey for core from 10.200.16.10 port 39518 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:41:29.421005 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:29.424479 systemd-logind[1712]: New session 8 of user core. Jan 20 01:41:29.435208 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 20 01:41:29.677347 sudo[2205]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 01:41:29.678149 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:41:29.681311 sudo[2205]: pam_unix(sudo:session): session closed for user root
Jan 20 01:41:29.685627 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 20 01:41:29.685877 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:41:29.698500 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 20 01:41:29.699306 auditctl[2208]: No rules
Jan 20 01:41:29.699700 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 01:41:29.699836 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 20 01:41:29.701963 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 20 01:41:29.724075 augenrules[2226]: No rules
Jan 20 01:41:29.725425 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 20 01:41:29.727299 sudo[2204]: pam_unix(sudo:session): session closed for user root
Jan 20 01:41:29.798862 sshd[2201]: pam_unix(sshd:session): session closed for user core
Jan 20 01:41:29.801498 systemd[1]: sshd@5-10.200.20.33:22-10.200.16.10:39518.service: Deactivated successfully.
Jan 20 01:41:29.802850 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 01:41:29.804201 systemd-logind[1712]: Session 8 logged out. Waiting for processes to exit.
Jan 20 01:41:29.805253 systemd-logind[1712]: Removed session 8.
Jan 20 01:41:29.894107 systemd[1]: Started sshd@6-10.200.20.33:22-10.200.16.10:53114.service - OpenSSH per-connection server daemon (10.200.16.10:53114).
Jan 20 01:41:30.384337 sshd[2234]: Accepted publickey for core from 10.200.16.10 port 53114 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM
Jan 20 01:41:30.385606 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:41:30.389336 systemd-logind[1712]: New session 9 of user core.
Jan 20 01:41:30.397054 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 01:41:30.660554 sudo[2237]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 01:41:30.661063 sudo[2237]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 01:41:30.978170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 01:41:30.990067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:41:31.566189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:41:31.575144 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:41:31.611095 kubelet[2254]: E0120 01:41:31.611042 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:41:31.613813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:41:31.613982 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:41:32.351360 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 01:41:32.352162 (dockerd)[2267]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 01:41:32.920010 dockerd[2267]: time="2026-01-20T01:41:32.919956175Z" level=info msg="Starting up"
Jan 20 01:41:33.502258 dockerd[2267]: time="2026-01-20T01:41:33.502215192Z" level=info msg="Loading containers: start."
Jan 20 01:41:33.689929 kernel: Initializing XFRM netlink socket
Jan 20 01:41:33.845893 systemd-networkd[1367]: docker0: Link UP
Jan 20 01:41:33.863007 dockerd[2267]: time="2026-01-20T01:41:33.862961520Z" level=info msg="Loading containers: done."
Jan 20 01:41:33.873807 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4176543982-merged.mount: Deactivated successfully.
Jan 20 01:41:33.888568 dockerd[2267]: time="2026-01-20T01:41:33.888519712Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 01:41:33.888698 dockerd[2267]: time="2026-01-20T01:41:33.888665312Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 20 01:41:33.888829 dockerd[2267]: time="2026-01-20T01:41:33.888805911Z" level=info msg="Daemon has completed initialization"
Jan 20 01:41:33.937956 dockerd[2267]: time="2026-01-20T01:41:33.937883216Z" level=info msg="API listen on /run/docker.sock"
Jan 20 01:41:33.938723 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 01:41:34.629693 containerd[1767]: time="2026-01-20T01:41:34.629648880Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 20 01:41:35.349941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3198291847.mount: Deactivated successfully.
Jan 20 01:41:36.818952 containerd[1767]: time="2026-01-20T01:41:36.818326235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:36.820338 containerd[1767]: time="2026-01-20T01:41:36.820310274Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 20 01:41:36.822596 containerd[1767]: time="2026-01-20T01:41:36.822547154Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:36.827294 containerd[1767]: time="2026-01-20T01:41:36.826942552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:36.827987 containerd[1767]: time="2026-01-20T01:41:36.827956272Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.197864552s"
Jan 20 01:41:36.828053 containerd[1767]: time="2026-01-20T01:41:36.827989952Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 20 01:41:36.829023 containerd[1767]: time="2026-01-20T01:41:36.828812872Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 20 01:41:38.215606 containerd[1767]: time="2026-01-20T01:41:38.215548038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:38.217764 containerd[1767]: time="2026-01-20T01:41:38.217737157Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 20 01:41:38.220152 containerd[1767]: time="2026-01-20T01:41:38.220092517Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:38.225569 containerd[1767]: time="2026-01-20T01:41:38.224275635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:38.225569 containerd[1767]: time="2026-01-20T01:41:38.225265435Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.396140003s"
Jan 20 01:41:38.225569 containerd[1767]: time="2026-01-20T01:41:38.225294555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 20 01:41:38.226090 containerd[1767]: time="2026-01-20T01:41:38.226068795Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 20 01:41:39.328512 containerd[1767]: time="2026-01-20T01:41:39.328465526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:39.331163 containerd[1767]: time="2026-01-20T01:41:39.331112169Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747"
Jan 20 01:41:39.333951 containerd[1767]: time="2026-01-20T01:41:39.333872533Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:39.338151 containerd[1767]: time="2026-01-20T01:41:39.338105817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:39.339287 containerd[1767]: time="2026-01-20T01:41:39.339175819Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.113001064s"
Jan 20 01:41:39.339287 containerd[1767]: time="2026-01-20T01:41:39.339206259Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 20 01:41:39.340187 containerd[1767]: time="2026-01-20T01:41:39.340162860Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 20 01:41:40.287139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843458316.mount: Deactivated successfully.
Jan 20 01:41:40.561855 containerd[1767]: time="2026-01-20T01:41:40.561743157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:40.563649 containerd[1767]: time="2026-01-20T01:41:40.563499479Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724"
Jan 20 01:41:40.565538 containerd[1767]: time="2026-01-20T01:41:40.565471682Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:40.568991 containerd[1767]: time="2026-01-20T01:41:40.568947806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:40.569788 containerd[1767]: time="2026-01-20T01:41:40.569630287Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.229344587s"
Jan 20 01:41:40.569788 containerd[1767]: time="2026-01-20T01:41:40.569663767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 20 01:41:40.570166 containerd[1767]: time="2026-01-20T01:41:40.570084407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 20 01:41:41.254381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3699819405.mount: Deactivated successfully.
Jan 20 01:41:41.728082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 01:41:41.735040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:41:41.805216 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 20 01:41:42.765093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:41:42.768538 (kubelet)[2537]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 01:41:42.801264 kubelet[2537]: E0120 01:41:42.801213 2537 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 01:41:42.804370 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 01:41:42.804513 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 01:41:42.929927 containerd[1767]: time="2026-01-20T01:41:42.929834306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:42.932170 containerd[1767]: time="2026-01-20T01:41:42.932142625Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jan 20 01:41:42.934779 containerd[1767]: time="2026-01-20T01:41:42.934724304Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:42.938665 containerd[1767]: time="2026-01-20T01:41:42.938622382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:42.940445 containerd[1767]: time="2026-01-20T01:41:42.940323501Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.370211694s"
Jan 20 01:41:42.940445 containerd[1767]: time="2026-01-20T01:41:42.940355021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 20 01:41:42.941317 containerd[1767]: time="2026-01-20T01:41:42.940969581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 20 01:41:44.031755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711997053.mount: Deactivated successfully.
Jan 20 01:41:44.050689 containerd[1767]: time="2026-01-20T01:41:44.049926169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:44.053215 containerd[1767]: time="2026-01-20T01:41:44.053186328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 20 01:41:44.056019 containerd[1767]: time="2026-01-20T01:41:44.055970647Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:44.059688 containerd[1767]: time="2026-01-20T01:41:44.059650445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:44.060626 containerd[1767]: time="2026-01-20T01:41:44.060298885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.119298504s"
Jan 20 01:41:44.060626 containerd[1767]: time="2026-01-20T01:41:44.060329285Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 20 01:41:44.061082 containerd[1767]: time="2026-01-20T01:41:44.060932245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 20 01:41:44.201272 update_engine[1716]: I20260120 01:41:44.201210 1716 update_attempter.cc:509] Updating boot flags...
Jan 20 01:41:44.242932 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2560)
Jan 20 01:41:44.336997 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2560)
Jan 20 01:41:44.648369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3172127540.mount: Deactivated successfully.
Jan 20 01:41:46.891596 containerd[1767]: time="2026-01-20T01:41:46.891528451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:46.893984 containerd[1767]: time="2026-01-20T01:41:46.893956570Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Jan 20 01:41:46.896035 containerd[1767]: time="2026-01-20T01:41:46.896012170Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:46.900636 containerd[1767]: time="2026-01-20T01:41:46.900603449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:41:46.901670 containerd[1767]: time="2026-01-20T01:41:46.901403769Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.840444204s"
Jan 20 01:41:46.901670 containerd[1767]: time="2026-01-20T01:41:46.901435649Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 20 01:41:52.432845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:41:52.440071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:41:52.461012 systemd[1]: Reloading requested from client PID 2694 ('systemctl') (unit session-9.scope)...
Jan 20 01:41:52.461145 systemd[1]: Reloading...
Jan 20 01:41:52.554923 zram_generator::config[2741]: No configuration found.
Jan 20 01:41:52.654714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 20 01:41:52.731440 systemd[1]: Reloading finished in 269 ms.
Jan 20 01:41:52.783703 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:41:52.787556 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 01:41:52.787862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:41:52.794221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 01:41:52.940991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 01:41:52.945175 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 01:41:52.976355 kubelet[2804]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 01:41:52.976670 kubelet[2804]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 01:41:52.976670 kubelet[2804]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 01:41:52.976920 kubelet[2804]: I0120 01:41:52.976873 2804 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 01:41:53.419186 kubelet[2804]: I0120 01:41:53.419148 2804 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 20 01:41:53.419186 kubelet[2804]: I0120 01:41:53.419177 2804 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 01:41:53.419590 kubelet[2804]: I0120 01:41:53.419570 2804 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 20 01:41:53.440006 kubelet[2804]: E0120 01:41:53.439973 2804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:53.442500 kubelet[2804]: I0120 01:41:53.442476 2804 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 01:41:53.450001 kubelet[2804]: E0120 01:41:53.448709 2804 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 20 01:41:53.450001 kubelet[2804]: I0120 01:41:53.448737 2804 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 20 01:41:53.452124 kubelet[2804]: I0120 01:41:53.452103 2804 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 01:41:53.452945 kubelet[2804]: I0120 01:41:53.452894 2804 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 01:41:53.453196 kubelet[2804]: I0120 01:41:53.453032 2804 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-0046389dc1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 01:41:53.453333 kubelet[2804]: I0120 01:41:53.453322 2804 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 01:41:53.453384 kubelet[2804]: I0120 01:41:53.453377 2804 container_manager_linux.go:304] "Creating device plugin manager"
Jan 20 01:41:53.453536 kubelet[2804]: I0120 01:41:53.453526 2804 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:41:53.456313 kubelet[2804]: I0120 01:41:53.456298 2804 kubelet.go:446] "Attempting to sync node with API server"
Jan 20 01:41:53.456486 kubelet[2804]: I0120 01:41:53.456475 2804 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 01:41:53.456555 kubelet[2804]: I0120 01:41:53.456548 2804 kubelet.go:352] "Adding apiserver pod source"
Jan 20 01:41:53.456609 kubelet[2804]: I0120 01:41:53.456599 2804 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 01:41:53.463253 kubelet[2804]: I0120 01:41:53.463236 2804 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 20 01:41:53.463774 kubelet[2804]: I0120 01:41:53.463761 2804 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 01:41:53.463882 kubelet[2804]: W0120 01:41:53.463872 2804 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 01:41:53.464454 kubelet[2804]: I0120 01:41:53.464435 2804 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 01:41:53.464552 kubelet[2804]: I0120 01:41:53.464544 2804 server.go:1287] "Started kubelet"
Jan 20 01:41:53.464750 kubelet[2804]: W0120 01:41:53.464714 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-0046389dc1&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:53.464839 kubelet[2804]: E0120 01:41:53.464824 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-0046389dc1&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:53.470034 kubelet[2804]: I0120 01:41:53.470013 2804 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 01:41:53.474795 kubelet[2804]: W0120 01:41:53.473825 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:53.474795 kubelet[2804]: E0120 01:41:53.473892 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:53.474795 kubelet[2804]: I0120 01:41:53.474017 2804 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 01:41:53.475548 kubelet[2804]: I0120 01:41:53.475487 2804 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 01:41:53.475764 kubelet[2804]: I0120 01:41:53.475739 2804 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 01:41:53.476125 kubelet[2804]: I0120 01:41:53.476099 2804 server.go:479] "Adding debug handlers to kubelet server"
Jan 20 01:41:53.477922 kubelet[2804]: I0120 01:41:53.477893 2804 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 01:41:53.480421 kubelet[2804]: I0120 01:41:53.479088 2804 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 01:41:53.480787 kubelet[2804]: E0120 01:41:53.480642 2804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-0046389dc1.188c4ce81db89534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-0046389dc1,UID:ci-4081.3.6-n-0046389dc1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-0046389dc1,},FirstTimestamp:2026-01-20 01:41:53.464522036 +0000 UTC m=+0.515978795,LastTimestamp:2026-01-20 01:41:53.464522036 +0000 UTC m=+0.515978795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-0046389dc1,}"
Jan 20 01:41:53.480787 kubelet[2804]: E0120 01:41:53.479227 2804 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-0046389dc1\" not found"
Jan 20 01:41:53.480959 kubelet[2804]: I0120 01:41:53.480939 2804 factory.go:221] Registration of the systemd container factory successfully
Jan 20 01:41:53.481043 kubelet[2804]: I0120 01:41:53.481026 2804 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 01:41:53.481602 kubelet[2804]: I0120 01:41:53.479102 2804 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 01:41:53.481663 kubelet[2804]: I0120 01:41:53.481651 2804 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 01:41:53.483007 kubelet[2804]: E0120 01:41:53.482914 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0046389dc1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="200ms"
Jan 20 01:41:53.483694 kubelet[2804]: W0120 01:41:53.483660 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:53.483787 kubelet[2804]: E0120 01:41:53.483703 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:53.484569 kubelet[2804]: I0120 01:41:53.484323 2804 factory.go:221] Registration of the containerd container factory successfully
Jan 20 01:41:53.497306 kubelet[2804]: E0120 01:41:53.497288 2804 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 01:41:53.504370 kubelet[2804]: I0120 01:41:53.504347 2804 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 01:41:53.504370 kubelet[2804]: I0120 01:41:53.504362 2804 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 01:41:53.504450 kubelet[2804]: I0120 01:41:53.504378 2804 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 01:41:53.508008 kubelet[2804]: I0120 01:41:53.507964 2804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 20 01:41:53.508758 kubelet[2804]: I0120 01:41:53.508733 2804 policy_none.go:49] "None policy: Start"
Jan 20 01:41:53.508758 kubelet[2804]: I0120 01:41:53.508753 2804 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 01:41:53.508758 kubelet[2804]: I0120 01:41:53.508764 2804 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 01:41:53.509186 kubelet[2804]: I0120 01:41:53.509155 2804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 20 01:41:53.509186 kubelet[2804]: I0120 01:41:53.509182 2804 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 20 01:41:53.509265 kubelet[2804]: I0120 01:41:53.509202 2804 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 01:41:53.509265 kubelet[2804]: I0120 01:41:53.509208 2804 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 20 01:41:53.509265 kubelet[2804]: E0120 01:41:53.509241 2804 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 01:41:53.511302 kubelet[2804]: W0120 01:41:53.511125 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:53.511302 kubelet[2804]: E0120 01:41:53.511172 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:53.517550 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 20 01:41:53.527358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 20 01:41:53.530735 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 20 01:41:53.540939 kubelet[2804]: I0120 01:41:53.540537 2804 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 20 01:41:53.540939 kubelet[2804]: I0120 01:41:53.540701 2804 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 01:41:53.540939 kubelet[2804]: I0120 01:41:53.540712 2804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 01:41:53.540939 kubelet[2804]: I0120 01:41:53.540876 2804 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 01:41:53.542133 kubelet[2804]: E0120 01:41:53.542118 2804 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 01:41:53.542246 kubelet[2804]: E0120 01:41:53.542233 2804 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-0046389dc1\" not found"
Jan 20 01:41:53.619006 systemd[1]: Created slice kubepods-burstable-pod8cd03652c36377012344f5f8d7d9a19d.slice - libcontainer container kubepods-burstable-pod8cd03652c36377012344f5f8d7d9a19d.slice.
Jan 20 01:41:53.628836 kubelet[2804]: E0120 01:41:53.628570 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.631408 systemd[1]: Created slice kubepods-burstable-poded709d220d5a39664a9154da59fd3c7c.slice - libcontainer container kubepods-burstable-poded709d220d5a39664a9154da59fd3c7c.slice.
Jan 20 01:41:53.633445 kubelet[2804]: E0120 01:41:53.633429 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.635911 systemd[1]: Created slice kubepods-burstable-pod6e5777402846190ab9fa332d2306c92e.slice - libcontainer container kubepods-burstable-pod6e5777402846190ab9fa332d2306c92e.slice.
Jan 20 01:41:53.637645 kubelet[2804]: E0120 01:41:53.637627 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.641983 kubelet[2804]: I0120 01:41:53.641963 2804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.642314 kubelet[2804]: E0120 01:41:53.642288 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.683887 kubelet[2804]: E0120 01:41:53.683794 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0046389dc1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="400ms"
Jan 20 01:41:53.783355 kubelet[2804]: I0120 01:41:53.783105 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cd03652c36377012344f5f8d7d9a19d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" (UID: \"8cd03652c36377012344f5f8d7d9a19d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783355 kubelet[2804]: I0120 01:41:53.783145 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783355 kubelet[2804]: I0120 01:41:53.783163 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783355 kubelet[2804]: I0120 01:41:53.783180 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e5777402846190ab9fa332d2306c92e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-0046389dc1\" (UID: \"6e5777402846190ab9fa332d2306c92e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783355 kubelet[2804]: I0120 01:41:53.783194 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cd03652c36377012344f5f8d7d9a19d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" (UID: \"8cd03652c36377012344f5f8d7d9a19d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783549 kubelet[2804]: I0120 01:41:53.783207 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783549 kubelet[2804]: I0120 01:41:53.783222 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783549 kubelet[2804]: I0120 01:41:53.783239 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.783549 kubelet[2804]: I0120 01:41:53.783267 2804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cd03652c36377012344f5f8d7d9a19d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" (UID: \"8cd03652c36377012344f5f8d7d9a19d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.844832 kubelet[2804]: I0120 01:41:53.844505 2804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.844832 kubelet[2804]: E0120 01:41:53.844813 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:53.930157 containerd[1767]: time="2026-01-20T01:41:53.930009438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-0046389dc1,Uid:8cd03652c36377012344f5f8d7d9a19d,Namespace:kube-system,Attempt:0,}"
Jan 20 01:41:53.935247 containerd[1767]: time="2026-01-20T01:41:53.935004117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-0046389dc1,Uid:ed709d220d5a39664a9154da59fd3c7c,Namespace:kube-system,Attempt:0,}"
Jan 20 01:41:53.939017 containerd[1767]: time="2026-01-20T01:41:53.938982437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-0046389dc1,Uid:6e5777402846190ab9fa332d2306c92e,Namespace:kube-system,Attempt:0,}"
Jan 20 01:41:54.085012 kubelet[2804]: E0120 01:41:54.084972 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0046389dc1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="800ms"
Jan 20 01:41:54.246936 kubelet[2804]: I0120 01:41:54.246907 2804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:54.247250 kubelet[2804]: E0120 01:41:54.247219 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:54.751001 kubelet[2804]: W0120 01:41:54.750932 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:54.751001 kubelet[2804]: E0120 01:41:54.750972 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:54.797956 kubelet[2804]: W0120 01:41:54.797894 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-0046389dc1&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:54.798053 kubelet[2804]: E0120 01:41:54.797963 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-0046389dc1&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:54.840678 kubelet[2804]: W0120 01:41:54.840619 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:54.840678 kubelet[2804]: E0120 01:41:54.840654 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:54.885337 kubelet[2804]: E0120 01:41:54.885303 2804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-0046389dc1?timeout=10s\": dial tcp 10.200.20.33:6443: connect: connection refused" interval="1.6s"
Jan 20 01:41:54.902787 kubelet[2804]: W0120 01:41:54.902730 2804 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.33:6443: connect: connection refused
Jan 20 01:41:54.902787 kubelet[2804]: E0120 01:41:54.902763 2804 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:55.049783 kubelet[2804]: I0120 01:41:55.049195 2804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:55.050003 kubelet[2804]: E0120 01:41:55.049965 2804 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.33:6443/api/v1/nodes\": dial tcp 10.200.20.33:6443: connect: connection refused" node="ci-4081.3.6-n-0046389dc1"
Jan 20 01:41:55.121656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543780357.mount: Deactivated successfully.
Jan 20 01:41:55.136103 containerd[1767]: time="2026-01-20T01:41:55.136061734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 01:41:55.144641 containerd[1767]: time="2026-01-20T01:41:55.144614053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 20 01:41:55.146543 containerd[1767]: time="2026-01-20T01:41:55.146510452Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 01:41:55.149075 containerd[1767]: time="2026-01-20T01:41:55.149053852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 20 01:41:55.151324 containerd[1767]: time="2026-01-20T01:41:55.151290732Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 01:41:55.154404 containerd[1767]: time="2026-01-20T01:41:55.154373531Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 01:41:55.155528 containerd[1767]: time="2026-01-20T01:41:55.155488011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 20 01:41:55.157967 containerd[1767]: time="2026-01-20T01:41:55.157913011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 01:41:55.159174 containerd[1767]: time="2026-01-20T01:41:55.158729931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.223667174s"
Jan 20 01:41:55.162529 containerd[1767]: time="2026-01-20T01:41:55.162481450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.223445733s"
Jan 20 01:41:55.173399 containerd[1767]: time="2026-01-20T01:41:55.173346769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.243256531s"
Jan 20 01:41:55.586841 kubelet[2804]: E0120 01:41:55.586801 2804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.33:6443: connect: connection refused" logger="UnhandledError"
Jan 20 01:41:55.694006 kubelet[2804]: E0120 01:41:55.693886 2804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-0046389dc1.188c4ce81db89534 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-0046389dc1,UID:ci-4081.3.6-n-0046389dc1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-0046389dc1,},FirstTimestamp:2026-01-20 01:41:53.464522036 +0000 UTC m=+0.515978795,LastTimestamp:2026-01-20 01:41:53.464522036 +0000 UTC m=+0.515978795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-0046389dc1,}"
Jan 20 01:41:55.712511 containerd[1767]: time="2026-01-20T01:41:55.712356054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 01:41:55.712715 containerd[1767]: time="2026-01-20T01:41:55.712640054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 01:41:55.712975 containerd[1767]: time="2026-01-20T01:41:55.712882854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:41:55.713651 containerd[1767]: time="2026-01-20T01:41:55.713553293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:41:55.715925 containerd[1767]: time="2026-01-20T01:41:55.715647013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 01:41:55.715925 containerd[1767]: time="2026-01-20T01:41:55.715691573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 01:41:55.715925 containerd[1767]: time="2026-01-20T01:41:55.715710293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:41:55.715925 containerd[1767]: time="2026-01-20T01:41:55.715776933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:41:55.717426 containerd[1767]: time="2026-01-20T01:41:55.717193493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 01:41:55.717426 containerd[1767]: time="2026-01-20T01:41:55.717238493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 01:41:55.717426 containerd[1767]: time="2026-01-20T01:41:55.717253013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:41:55.717426 containerd[1767]: time="2026-01-20T01:41:55.717318173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:41:55.737151 systemd[1]: Started cri-containerd-ece2c1ec6d99c304bbb1e6d9b3ee7b07ace8cc22290e8f5471f63f0b20312c89.scope - libcontainer container ece2c1ec6d99c304bbb1e6d9b3ee7b07ace8cc22290e8f5471f63f0b20312c89.
Jan 20 01:41:55.741338 systemd[1]: Started cri-containerd-060dd2251e4080e0474f3266c86ffbc88e831b9422ac7b568e705d1118b412ac.scope - libcontainer container 060dd2251e4080e0474f3266c86ffbc88e831b9422ac7b568e705d1118b412ac.
Jan 20 01:41:55.753331 systemd[1]: Started cri-containerd-bcc17f5e7d006f672202ba820743b84c97a1b978dc66e285c05dd2f24581a7fd.scope - libcontainer container bcc17f5e7d006f672202ba820743b84c97a1b978dc66e285c05dd2f24581a7fd.
Jan 20 01:41:55.790869 containerd[1767]: time="2026-01-20T01:41:55.790823083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-0046389dc1,Uid:ed709d220d5a39664a9154da59fd3c7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece2c1ec6d99c304bbb1e6d9b3ee7b07ace8cc22290e8f5471f63f0b20312c89\""
Jan 20 01:41:55.796185 containerd[1767]: time="2026-01-20T01:41:55.796069122Z" level=info msg="CreateContainer within sandbox \"ece2c1ec6d99c304bbb1e6d9b3ee7b07ace8cc22290e8f5471f63f0b20312c89\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 20 01:41:55.796765 containerd[1767]: time="2026-01-20T01:41:55.796285402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-0046389dc1,Uid:8cd03652c36377012344f5f8d7d9a19d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcc17f5e7d006f672202ba820743b84c97a1b978dc66e285c05dd2f24581a7fd\""
Jan 20 01:41:55.800776 containerd[1767]: time="2026-01-20T01:41:55.800527601Z" level=info msg="CreateContainer within sandbox \"bcc17f5e7d006f672202ba820743b84c97a1b978dc66e285c05dd2f24581a7fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 20 01:41:55.806514 containerd[1767]: time="2026-01-20T01:41:55.806490080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-0046389dc1,Uid:6e5777402846190ab9fa332d2306c92e,Namespace:kube-system,Attempt:0,} returns sandbox id \"060dd2251e4080e0474f3266c86ffbc88e831b9422ac7b568e705d1118b412ac\""
Jan 20 01:41:55.808870 containerd[1767]: time="2026-01-20T01:41:55.808845200Z" level=info msg="CreateContainer within sandbox \"060dd2251e4080e0474f3266c86ffbc88e831b9422ac7b568e705d1118b412ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 20 01:41:55.851994 containerd[1767]: time="2026-01-20T01:41:55.851245074Z" level=info msg="CreateContainer within sandbox \"ece2c1ec6d99c304bbb1e6d9b3ee7b07ace8cc22290e8f5471f63f0b20312c89\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2e77fe03af19092acca153580ea95efc03e7d48851a3f6f443af516e2c296b4\""
Jan 20 01:41:55.852191 containerd[1767]: time="2026-01-20T01:41:55.852165194Z" level=info msg="StartContainer for \"d2e77fe03af19092acca153580ea95efc03e7d48851a3f6f443af516e2c296b4\""
Jan 20 01:41:55.854591 containerd[1767]: time="2026-01-20T01:41:55.854455794Z" level=info msg="CreateContainer within sandbox \"060dd2251e4080e0474f3266c86ffbc88e831b9422ac7b568e705d1118b412ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8879c9b52b7ce307e5ac93ec325907f63e6ea4b46b4e1ab85b5925f35df686af\""
Jan 20 01:41:55.855149 containerd[1767]: time="2026-01-20T01:41:55.855073914Z" level=info msg="StartContainer for \"8879c9b52b7ce307e5ac93ec325907f63e6ea4b46b4e1ab85b5925f35df686af\""
Jan 20 01:41:55.857061 containerd[1767]: time="2026-01-20T01:41:55.856988673Z" level=info msg="CreateContainer within sandbox \"bcc17f5e7d006f672202ba820743b84c97a1b978dc66e285c05dd2f24581a7fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4fa2a3b01c3c5d9eebd1e27b65c37ee87861a4712a434075b25380dce3cb434e\""
Jan 20 01:41:55.857503 containerd[1767]: time="2026-01-20T01:41:55.857400873Z" level=info msg="StartContainer for \"4fa2a3b01c3c5d9eebd1e27b65c37ee87861a4712a434075b25380dce3cb434e\""
Jan 20 01:41:55.885111 systemd[1]: Started cri-containerd-4fa2a3b01c3c5d9eebd1e27b65c37ee87861a4712a434075b25380dce3cb434e.scope - libcontainer container 4fa2a3b01c3c5d9eebd1e27b65c37ee87861a4712a434075b25380dce3cb434e.
Jan 20 01:41:55.894030 systemd[1]: Started cri-containerd-d2e77fe03af19092acca153580ea95efc03e7d48851a3f6f443af516e2c296b4.scope - libcontainer container d2e77fe03af19092acca153580ea95efc03e7d48851a3f6f443af516e2c296b4.
Jan 20 01:41:55.897156 systemd[1]: Started cri-containerd-8879c9b52b7ce307e5ac93ec325907f63e6ea4b46b4e1ab85b5925f35df686af.scope - libcontainer container 8879c9b52b7ce307e5ac93ec325907f63e6ea4b46b4e1ab85b5925f35df686af.
Jan 20 01:41:55.942676 containerd[1767]: time="2026-01-20T01:41:55.941886222Z" level=info msg="StartContainer for \"d2e77fe03af19092acca153580ea95efc03e7d48851a3f6f443af516e2c296b4\" returns successfully" Jan 20 01:41:55.946515 containerd[1767]: time="2026-01-20T01:41:55.946377461Z" level=info msg="StartContainer for \"4fa2a3b01c3c5d9eebd1e27b65c37ee87861a4712a434075b25380dce3cb434e\" returns successfully" Jan 20 01:41:55.960216 containerd[1767]: time="2026-01-20T01:41:55.960176819Z" level=info msg="StartContainer for \"8879c9b52b7ce307e5ac93ec325907f63e6ea4b46b4e1ab85b5925f35df686af\" returns successfully" Jan 20 01:41:56.520761 kubelet[2804]: E0120 01:41:56.520591 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:56.524199 kubelet[2804]: E0120 01:41:56.524066 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:56.525573 kubelet[2804]: E0120 01:41:56.525461 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:56.653104 kubelet[2804]: I0120 01:41:56.653079 2804 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:57.528081 kubelet[2804]: E0120 01:41:57.527894 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:57.528557 kubelet[2804]: E0120 01:41:57.528443 2804 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.003171 kubelet[2804]: E0120 01:41:58.003105 2804 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-0046389dc1\" not found" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.088535 kubelet[2804]: I0120 01:41:58.088494 2804 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.180486 kubelet[2804]: I0120 01:41:58.180450 2804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.203195 kubelet[2804]: E0120 01:41:58.203158 2804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.203195 kubelet[2804]: I0120 01:41:58.203188 2804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.205758 kubelet[2804]: E0120 01:41:58.205727 2804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.205758 kubelet[2804]: I0120 01:41:58.205756 2804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" Jan 20 
01:41:58.207855 kubelet[2804]: E0120 01:41:58.207828 2804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-0046389dc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.472245 kubelet[2804]: I0120 01:41:58.472146 2804 apiserver.go:52] "Watching apiserver" Jan 20 01:41:58.482083 kubelet[2804]: I0120 01:41:58.482054 2804 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:41:58.526604 kubelet[2804]: I0120 01:41:58.526583 2804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.529568 kubelet[2804]: E0120 01:41:58.529545 2804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-0046389dc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.568742 kubelet[2804]: I0120 01:41:58.568719 2804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:58.570436 kubelet[2804]: E0120 01:41:58.570410 2804 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:59.878547 kubelet[2804]: I0120 01:41:59.878518 2804 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:41:59.886153 kubelet[2804]: W0120 01:41:59.886130 2804 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:00.005685 systemd[1]: Reloading requested from client PID 3074 ('systemctl') (unit session-9.scope)... Jan 20 01:42:00.005698 systemd[1]: Reloading... Jan 20 01:42:00.091954 zram_generator::config[3117]: No configuration found. Jan 20 01:42:00.198811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:42:00.302928 systemd[1]: Reloading finished in 296 ms. Jan 20 01:42:00.334966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:00.350879 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:42:00.351106 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:00.356181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:00.453120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:00.461173 (kubelet)[3178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:42:00.499303 kubelet[3178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:42:00.501516 kubelet[3178]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 20 01:42:00.501516 kubelet[3178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:42:00.501516 kubelet[3178]: I0120 01:42:00.499704 3178 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:42:00.504842 kubelet[3178]: I0120 01:42:00.504820 3178 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:42:00.504965 kubelet[3178]: I0120 01:42:00.504955 3178 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:42:00.505227 kubelet[3178]: I0120 01:42:00.505213 3178 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:42:00.507161 kubelet[3178]: I0120 01:42:00.507134 3178 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 01:42:00.509956 kubelet[3178]: I0120 01:42:00.509782 3178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:42:00.513039 kubelet[3178]: E0120 01:42:00.513011 3178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:42:00.513039 kubelet[3178]: I0120 01:42:00.513039 3178 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 01:42:00.515890 kubelet[3178]: I0120 01:42:00.515871 3178 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:42:00.516111 kubelet[3178]: I0120 01:42:00.516086 3178 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:42:00.516266 kubelet[3178]: I0120 01:42:00.516110 3178 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-0046389dc1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:42:00.516349 kubelet[3178]: I0120 01:42:00.516275 3178 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:42:00.516349 kubelet[3178]: I0120 01:42:00.516284 3178 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:42:00.516349 kubelet[3178]: I0120 01:42:00.516327 3178 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:42:00.516449 kubelet[3178]: I0120 01:42:00.516434 3178 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:42:00.516480 kubelet[3178]: I0120 01:42:00.516451 3178 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:42:00.516480 kubelet[3178]: I0120 01:42:00.516467 3178 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:42:00.519917 kubelet[3178]: I0120 01:42:00.516476 3178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:42:00.523000 kubelet[3178]: I0120 01:42:00.522976 3178 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:42:00.523463 kubelet[3178]: I0120 01:42:00.523444 3178 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:42:00.523818 kubelet[3178]: I0120 01:42:00.523799 3178 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:42:00.523853 kubelet[3178]: I0120 01:42:00.523829 3178 server.go:1287] "Started kubelet" Jan 20 01:42:00.535905 kubelet[3178]: I0120 01:42:00.533806 3178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:42:00.540369 kubelet[3178]: I0120 01:42:00.539637 3178 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:42:00.541275 kubelet[3178]: I0120 01:42:00.541122 3178 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:42:00.543443 kubelet[3178]: I0120 01:42:00.542927 3178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:42:00.543443 kubelet[3178]: I0120 01:42:00.543115 3178 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:42:00.558372 kubelet[3178]: I0120 01:42:00.544441 3178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:42:00.558372 kubelet[3178]: I0120 01:42:00.547407 3178 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:42:00.559002 kubelet[3178]: I0120 01:42:00.547425 3178 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:42:00.559002 kubelet[3178]: E0120 01:42:00.547536 3178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-0046389dc1\" not found" Jan 20 01:42:00.560930 kubelet[3178]: I0120 01:42:00.560542 3178 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:42:00.566053 kubelet[3178]: I0120 01:42:00.566022 3178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:42:00.568056 kubelet[3178]: I0120 01:42:00.568033 3178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:42:00.568056 kubelet[3178]: I0120 01:42:00.568055 3178 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:42:00.568147 kubelet[3178]: I0120 01:42:00.568073 3178 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:42:00.568147 kubelet[3178]: I0120 01:42:00.568079 3178 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:42:00.568147 kubelet[3178]: E0120 01:42:00.568114 3178 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:42:00.571061 kubelet[3178]: I0120 01:42:00.571044 3178 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:42:00.572461 kubelet[3178]: I0120 01:42:00.571958 3178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:42:00.575431 kubelet[3178]: I0120 01:42:00.575407 3178 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:42:00.578405 kubelet[3178]: E0120 01:42:00.578172 3178 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:42:00.633662 kubelet[3178]: I0120 01:42:00.633618 3178 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:42:00.633917 kubelet[3178]: I0120 01:42:00.633809 3178 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:42:00.633917 kubelet[3178]: I0120 01:42:00.633833 3178 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:42:00.634108 kubelet[3178]: I0120 01:42:00.634093 3178 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:42:00.634791 kubelet[3178]: I0120 01:42:00.634169 3178 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:42:00.634791 kubelet[3178]: I0120 01:42:00.634195 3178 policy_none.go:49] "None policy: Start" Jan 20 01:42:00.634791 kubelet[3178]: I0120 01:42:00.634205 3178 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:42:00.634791 kubelet[3178]: I0120 01:42:00.634215 3178 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:42:00.634791 kubelet[3178]: I0120 01:42:00.634314 3178 state_mem.go:75] "Updated machine memory state" Jan 20 01:42:00.637781 kubelet[3178]: I0120 01:42:00.637757 3178 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:42:00.637932 kubelet[3178]: I0120 01:42:00.637916 3178 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:42:00.637978 kubelet[3178]: I0120 01:42:00.637934 3178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:42:00.638512 kubelet[3178]: I0120 01:42:00.638418 3178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:42:00.641054 kubelet[3178]: E0120 01:42:00.640811 3178 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:42:00.668753 kubelet[3178]: I0120 01:42:00.668713 3178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.669512 kubelet[3178]: I0120 01:42:00.669056 3178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.669512 kubelet[3178]: I0120 01:42:00.669217 3178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.676942 kubelet[3178]: W0120 01:42:00.676917 3178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:00.681515 kubelet[3178]: W0120 01:42:00.681494 3178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:00.682460 kubelet[3178]: W0120 01:42:00.682436 3178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:00.682541 kubelet[3178]: E0120 01:42:00.682482 3178 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.741292 kubelet[3178]: I0120 01:42:00.741261 3178 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.759121 kubelet[3178]: I0120 01:42:00.759093 3178 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.759248 kubelet[3178]: I0120 01:42:00.759165 3178 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763219 kubelet[3178]: I0120 01:42:00.763180 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763321 kubelet[3178]: I0120 01:42:00.763216 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e5777402846190ab9fa332d2306c92e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-0046389dc1\" (UID: \"6e5777402846190ab9fa332d2306c92e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763321 kubelet[3178]: I0120 01:42:00.763274 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cd03652c36377012344f5f8d7d9a19d-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" (UID: \"8cd03652c36377012344f5f8d7d9a19d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763321 kubelet[3178]: I0120 01:42:00.763291 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cd03652c36377012344f5f8d7d9a19d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" 
(UID: \"8cd03652c36377012344f5f8d7d9a19d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763394 kubelet[3178]: I0120 01:42:00.763318 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cd03652c36377012344f5f8d7d9a19d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" (UID: \"8cd03652c36377012344f5f8d7d9a19d\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763394 kubelet[3178]: I0120 01:42:00.763337 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763394 kubelet[3178]: I0120 01:42:00.763354 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763394 kubelet[3178]: I0120 01:42:00.763370 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:00.763487 kubelet[3178]: I0120 01:42:00.763420 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed709d220d5a39664a9154da59fd3c7c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-0046389dc1\" (UID: \"ed709d220d5a39664a9154da59fd3c7c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:01.520884 kubelet[3178]: I0120 01:42:01.520646 3178 apiserver.go:52] "Watching apiserver" Jan 20 01:42:01.558696 kubelet[3178]: I0120 01:42:01.558650 3178 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:42:01.575017 kubelet[3178]: I0120 01:42:01.574849 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-0046389dc1" podStartSLOduration=2.574831958 podStartE2EDuration="2.574831958s" podCreationTimestamp="2026-01-20 01:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:01.574224838 +0000 UTC m=+1.108174087" watchObservedRunningTime="2026-01-20 01:42:01.574831958 +0000 UTC m=+1.108781207" Jan 20 01:42:01.586166 kubelet[3178]: I0120 01:42:01.586125 3178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:01.604995 kubelet[3178]: I0120 01:42:01.604884 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" podStartSLOduration=1.604866714 
podStartE2EDuration="1.604866714s" podCreationTimestamp="2026-01-20 01:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:01.588932236 +0000 UTC m=+1.122881525" watchObservedRunningTime="2026-01-20 01:42:01.604866714 +0000 UTC m=+1.138815923" Jan 20 01:42:01.605226 kubelet[3178]: W0120 01:42:01.605115 3178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:01.605226 kubelet[3178]: E0120 01:42:01.605153 3178 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-0046389dc1\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-0046389dc1" Jan 20 01:42:01.619347 kubelet[3178]: I0120 01:42:01.618942 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-0046389dc1" podStartSLOduration=1.618927352 podStartE2EDuration="1.618927352s" podCreationTimestamp="2026-01-20 01:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:01.605334754 +0000 UTC m=+1.139284003" watchObservedRunningTime="2026-01-20 01:42:01.618927352 +0000 UTC m=+1.152876601" Jan 20 01:42:04.777009 kubelet[3178]: I0120 01:42:04.776973 3178 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:42:04.777354 containerd[1767]: time="2026-01-20T01:42:04.777261091Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 01:42:04.777521 kubelet[3178]: I0120 01:42:04.777422 3178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:42:05.731071 systemd[1]: Created slice kubepods-besteffort-podf1ccfd4e_f686_49bc_9ef0_9e8174cad640.slice - libcontainer container kubepods-besteffort-podf1ccfd4e_f686_49bc_9ef0_9e8174cad640.slice. 
Jan 20 01:42:05.790769 kubelet[3178]: I0120 01:42:05.790725 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1ccfd4e-f686-49bc-9ef0-9e8174cad640-kube-proxy\") pod \"kube-proxy-h8j7h\" (UID: \"f1ccfd4e-f686-49bc-9ef0-9e8174cad640\") " pod="kube-system/kube-proxy-h8j7h" Jan 20 01:42:05.790769 kubelet[3178]: I0120 01:42:05.790766 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1ccfd4e-f686-49bc-9ef0-9e8174cad640-xtables-lock\") pod \"kube-proxy-h8j7h\" (UID: \"f1ccfd4e-f686-49bc-9ef0-9e8174cad640\") " pod="kube-system/kube-proxy-h8j7h" Jan 20 01:42:05.791221 kubelet[3178]: I0120 01:42:05.790785 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mt9l\" (UniqueName: \"kubernetes.io/projected/f1ccfd4e-f686-49bc-9ef0-9e8174cad640-kube-api-access-4mt9l\") pod \"kube-proxy-h8j7h\" (UID: \"f1ccfd4e-f686-49bc-9ef0-9e8174cad640\") " pod="kube-system/kube-proxy-h8j7h" Jan 20 01:42:05.791221 kubelet[3178]: I0120 01:42:05.790803 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1ccfd4e-f686-49bc-9ef0-9e8174cad640-lib-modules\") pod \"kube-proxy-h8j7h\" (UID: \"f1ccfd4e-f686-49bc-9ef0-9e8174cad640\") " pod="kube-system/kube-proxy-h8j7h" Jan 20 01:42:05.906128 systemd[1]: Created slice kubepods-besteffort-podf93e4094_2bb5_48f9_8ac8_c7004534c28b.slice - libcontainer container kubepods-besteffort-podf93e4094_2bb5_48f9_8ac8_c7004534c28b.slice. Jan 20 01:42:05.992099 kubelet[3178]: I0120 01:42:05.991892 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f93e4094-2bb5-48f9-8ac8-c7004534c28b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-wmc7z\" (UID: \"f93e4094-2bb5-48f9-8ac8-c7004534c28b\") " pod="tigera-operator/tigera-operator-7dcd859c48-wmc7z" Jan 20 01:42:05.992099 kubelet[3178]: I0120 01:42:05.991957 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmf49\" (UniqueName: \"kubernetes.io/projected/f93e4094-2bb5-48f9-8ac8-c7004534c28b-kube-api-access-pmf49\") pod \"tigera-operator-7dcd859c48-wmc7z\" (UID: \"f93e4094-2bb5-48f9-8ac8-c7004534c28b\") " pod="tigera-operator/tigera-operator-7dcd859c48-wmc7z" Jan 20 01:42:06.038267 containerd[1767]: time="2026-01-20T01:42:06.038215424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h8j7h,Uid:f1ccfd4e-f686-49bc-9ef0-9e8174cad640,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:06.070822 containerd[1767]: time="2026-01-20T01:42:06.070716273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:06.070822 containerd[1767]: time="2026-01-20T01:42:06.070773393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:06.070822 containerd[1767]: time="2026-01-20T01:42:06.070791833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:06.071218 containerd[1767]: time="2026-01-20T01:42:06.071098713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:06.085466 systemd[1]: run-containerd-runc-k8s.io-550d1d6b9b42b8386ee92b89f9b791919f67cd00ba9ffad5db169fd816237e23-runc.wIw1Lh.mount: Deactivated successfully. Jan 20 01:42:06.093197 systemd[1]: Started cri-containerd-550d1d6b9b42b8386ee92b89f9b791919f67cd00ba9ffad5db169fd816237e23.scope - libcontainer container 550d1d6b9b42b8386ee92b89f9b791919f67cd00ba9ffad5db169fd816237e23. Jan 20 01:42:06.115487 containerd[1767]: time="2026-01-20T01:42:06.115447485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h8j7h,Uid:f1ccfd4e-f686-49bc-9ef0-9e8174cad640,Namespace:kube-system,Attempt:0,} returns sandbox id \"550d1d6b9b42b8386ee92b89f9b791919f67cd00ba9ffad5db169fd816237e23\"" Jan 20 01:42:06.122061 containerd[1767]: time="2026-01-20T01:42:06.121655287Z" level=info msg="CreateContainer within sandbox \"550d1d6b9b42b8386ee92b89f9b791919f67cd00ba9ffad5db169fd816237e23\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:42:06.159262 containerd[1767]: time="2026-01-20T01:42:06.159222016Z" level=info msg="CreateContainer within sandbox \"550d1d6b9b42b8386ee92b89f9b791919f67cd00ba9ffad5db169fd816237e23\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16cab56c25544599980233fcffe52c75c88bbe1f66d21cc52ba8aee4be594830\"" Jan 20 01:42:06.160122 containerd[1767]: time="2026-01-20T01:42:06.160066577Z" level=info msg="StartContainer for \"16cab56c25544599980233fcffe52c75c88bbe1f66d21cc52ba8aee4be594830\"" Jan 20 01:42:06.180034 systemd[1]: Started cri-containerd-16cab56c25544599980233fcffe52c75c88bbe1f66d21cc52ba8aee4be594830.scope - libcontainer container 16cab56c25544599980233fcffe52c75c88bbe1f66d21cc52ba8aee4be594830. Jan 20 01:42:06.206271 containerd[1767]: time="2026-01-20T01:42:06.206228469Z" level=info msg="StartContainer for \"16cab56c25544599980233fcffe52c75c88bbe1f66d21cc52ba8aee4be594830\" returns successfully" Jan 20 01:42:06.217863 containerd[1767]: time="2026-01-20T01:42:06.217823952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wmc7z,Uid:f93e4094-2bb5-48f9-8ac8-c7004534c28b,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:42:06.256363 containerd[1767]: time="2026-01-20T01:42:06.256008122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:06.256363 containerd[1767]: time="2026-01-20T01:42:06.256137882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:06.256363 containerd[1767]: time="2026-01-20T01:42:06.256153962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:06.256588 containerd[1767]: time="2026-01-20T01:42:06.256468682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:06.276552 systemd[1]: Started cri-containerd-e8dea84999d15e076cf3675a8c28b7142c77e1bce1ff753634ec1d54bdf7ff31.scope - libcontainer container e8dea84999d15e076cf3675a8c28b7142c77e1bce1ff753634ec1d54bdf7ff31. 
Jan 20 01:42:06.315136 containerd[1767]: time="2026-01-20T01:42:06.315089018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-wmc7z,Uid:f93e4094-2bb5-48f9-8ac8-c7004534c28b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e8dea84999d15e076cf3675a8c28b7142c77e1bce1ff753634ec1d54bdf7ff31\"" Jan 20 01:42:06.317100 containerd[1767]: time="2026-01-20T01:42:06.316892818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:42:07.004487 kubelet[3178]: I0120 01:42:07.004424 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h8j7h" podStartSLOduration=2.00440628 podStartE2EDuration="2.00440628s" podCreationTimestamp="2026-01-20 01:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:06.609334615 +0000 UTC m=+6.143283864" watchObservedRunningTime="2026-01-20 01:42:07.00440628 +0000 UTC m=+6.538355529" Jan 20 01:42:08.374302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303299819.mount: Deactivated successfully. Jan 20 01:42:08.895827 containerd[1767]: time="2026-01-20T01:42:08.895114825Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:08.897227 containerd[1767]: time="2026-01-20T01:42:08.897194185Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 20 01:42:08.899930 containerd[1767]: time="2026-01-20T01:42:08.899883344Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:08.903280 containerd[1767]: time="2026-01-20T01:42:08.903238143Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:08.904094 containerd[1767]: time="2026-01-20T01:42:08.903999383Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.586890285s" Jan 20 01:42:08.904094 containerd[1767]: time="2026-01-20T01:42:08.904027943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 20 01:42:08.907295 containerd[1767]: time="2026-01-20T01:42:08.907181182Z" level=info msg="CreateContainer within sandbox \"e8dea84999d15e076cf3675a8c28b7142c77e1bce1ff753634ec1d54bdf7ff31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:42:08.931324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928269411.mount: Deactivated successfully. 
Jan 20 01:42:08.937180 containerd[1767]: time="2026-01-20T01:42:08.937108134Z" level=info msg="CreateContainer within sandbox \"e8dea84999d15e076cf3675a8c28b7142c77e1bce1ff753634ec1d54bdf7ff31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ea075dc43a2671836f354839e33b70dea953a0efcbd8ee7c17a815e5e0006da0\"" Jan 20 01:42:08.938205 containerd[1767]: time="2026-01-20T01:42:08.937681774Z" level=info msg="StartContainer for \"ea075dc43a2671836f354839e33b70dea953a0efcbd8ee7c17a815e5e0006da0\"" Jan 20 01:42:08.965084 systemd[1]: Started cri-containerd-ea075dc43a2671836f354839e33b70dea953a0efcbd8ee7c17a815e5e0006da0.scope - libcontainer container ea075dc43a2671836f354839e33b70dea953a0efcbd8ee7c17a815e5e0006da0. Jan 20 01:42:08.988625 containerd[1767]: time="2026-01-20T01:42:08.988585639Z" level=info msg="StartContainer for \"ea075dc43a2671836f354839e33b70dea953a0efcbd8ee7c17a815e5e0006da0\" returns successfully" Jan 20 01:42:09.738480 kubelet[3178]: I0120 01:42:09.738416 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-wmc7z" podStartSLOduration=2.149738385 podStartE2EDuration="4.73840067s" podCreationTimestamp="2026-01-20 01:42:05 +0000 UTC" firstStartedPulling="2026-01-20 01:42:06.316255018 +0000 UTC m=+5.850204267" lastFinishedPulling="2026-01-20 01:42:08.904917343 +0000 UTC m=+8.438866552" observedRunningTime="2026-01-20 01:42:09.619978183 +0000 UTC m=+9.153927432" watchObservedRunningTime="2026-01-20 01:42:09.73840067 +0000 UTC m=+9.272349919" Jan 20 01:42:11.493729 waagent[1916]: 2026-01-20T01:42:11.493661Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 20 01:42:11.500095 waagent[1916]: 2026-01-20T01:42:11.500044Z INFO ExtHandler Jan 20 01:42:11.500202 waagent[1916]: 2026-01-20T01:42:11.500167Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0c5da7cc-44df-463b-9d9f-76a308672915 eTag: 8551222619812886235 source: Fabric] Jan 20 01:42:11.501769 waagent[1916]: 2026-01-20T01:42:11.500492Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 20 01:42:11.501769 waagent[1916]: 2026-01-20T01:42:11.501128Z INFO ExtHandler Jan 20 01:42:11.501769 waagent[1916]: 2026-01-20T01:42:11.501213Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 20 01:42:11.573007 waagent[1916]: 2026-01-20T01:42:11.572950Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 20 01:42:11.647490 waagent[1916]: 2026-01-20T01:42:11.647401Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EF87708046FBD99058DE94840C7731114875FC25', 'hasPrivateKey': True} Jan 20 01:42:11.648310 waagent[1916]: 2026-01-20T01:42:11.647963Z INFO ExtHandler Fetch goal state completed Jan 20 01:42:11.648402 waagent[1916]: 2026-01-20T01:42:11.648361Z INFO ExtHandler ExtHandler Jan 20 01:42:11.648469 waagent[1916]: 2026-01-20T01:42:11.648440Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: f2e5561a-9cbf-4c38-a012-06fac4553893 correlation a81b7840-a8be-4d16-9234-e941613b84b8 created: 2026-01-20T01:42:03.002563Z] Jan 20 01:42:11.648930 waagent[1916]: 2026-01-20T01:42:11.648744Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 20 01:42:11.652925 waagent[1916]: 2026-01-20T01:42:11.651372Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms] Jan 20 01:42:14.708084 sudo[2237]: pam_unix(sudo:session): session closed for user root Jan 20 01:42:14.786300 sshd[2234]: pam_unix(sshd:session): session closed for user core Jan 20 01:42:14.790289 systemd[1]: sshd@6-10.200.20.33:22-10.200.16.10:53114.service: Deactivated successfully. Jan 20 01:42:14.793327 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 01:42:14.794007 systemd[1]: session-9.scope: Consumed 6.802s CPU time, 152.6M memory peak, 0B memory swap peak. Jan 20 01:42:14.798219 systemd-logind[1712]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:42:14.799097 systemd-logind[1712]: Removed session 9. Jan 20 01:42:25.201639 kubelet[3178]: I0120 01:42:25.201569 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a4c5f9ed-727c-4196-971f-a61a2f373caa-typha-certs\") pod \"calico-typha-66c9f889cb-kmwjm\" (UID: \"a4c5f9ed-727c-4196-971f-a61a2f373caa\") " pod="calico-system/calico-typha-66c9f889cb-kmwjm" Jan 20 01:42:25.201639 kubelet[3178]: I0120 01:42:25.201607 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4c5f9ed-727c-4196-971f-a61a2f373caa-tigera-ca-bundle\") pod \"calico-typha-66c9f889cb-kmwjm\" (UID: \"a4c5f9ed-727c-4196-971f-a61a2f373caa\") " pod="calico-system/calico-typha-66c9f889cb-kmwjm" Jan 20 01:42:25.201639 kubelet[3178]: I0120 01:42:25.201628 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46z82\" (UniqueName: \"kubernetes.io/projected/a4c5f9ed-727c-4196-971f-a61a2f373caa-kube-api-access-46z82\") pod \"calico-typha-66c9f889cb-kmwjm\" (UID: \"a4c5f9ed-727c-4196-971f-a61a2f373caa\") " pod="calico-system/calico-typha-66c9f889cb-kmwjm" Jan 20 01:42:25.202037 systemd[1]: Created slice kubepods-besteffort-poda4c5f9ed_727c_4196_971f_a61a2f373caa.slice - libcontainer container kubepods-besteffort-poda4c5f9ed_727c_4196_971f_a61a2f373caa.slice. Jan 20 01:42:25.375421 systemd[1]: Created slice kubepods-besteffort-pod32fedd4c_3c06_436d_8c97_c6227f359b5f.slice - libcontainer container kubepods-besteffort-pod32fedd4c_3c06_436d_8c97_c6227f359b5f.slice. 
Jan 20 01:42:25.402662 kubelet[3178]: I0120 01:42:25.402415 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-flexvol-driver-host\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402662 kubelet[3178]: I0120 01:42:25.402449 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32fedd4c-3c06-436d-8c97-c6227f359b5f-node-certs\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402662 kubelet[3178]: I0120 01:42:25.402464 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-xtables-lock\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402662 kubelet[3178]: I0120 01:42:25.402478 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-cni-bin-dir\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402662 kubelet[3178]: I0120 01:42:25.402494 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-cni-net-dir\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402883 kubelet[3178]: I0120 01:42:25.402511 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg42k\" (UniqueName: \"kubernetes.io/projected/32fedd4c-3c06-436d-8c97-c6227f359b5f-kube-api-access-mg42k\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402883 kubelet[3178]: I0120 01:42:25.402527 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-lib-modules\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402883 kubelet[3178]: I0120 01:42:25.402543 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32fedd4c-3c06-436d-8c97-c6227f359b5f-tigera-ca-bundle\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402883 kubelet[3178]: I0120 01:42:25.402557 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-var-lib-calico\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.402883 kubelet[3178]: I0120 01:42:25.402577 3178 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-cni-log-dir\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.403028 kubelet[3178]: I0120 01:42:25.402593 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-policysync\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.403028 kubelet[3178]: I0120 01:42:25.402607 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32fedd4c-3c06-436d-8c97-c6227f359b5f-var-run-calico\") pod \"calico-node-cjl8v\" (UID: \"32fedd4c-3c06-436d-8c97-c6227f359b5f\") " pod="calico-system/calico-node-cjl8v" Jan 20 01:42:25.504642 kubelet[3178]: E0120 01:42:25.504621 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.504922 kubelet[3178]: W0120 01:42:25.504770 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.504922 kubelet[3178]: E0120 01:42:25.504796 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.505123 kubelet[3178]: E0120 01:42:25.505110 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.505185 kubelet[3178]: W0120 01:42:25.505175 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.505303 kubelet[3178]: E0120 01:42:25.505231 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.505456 kubelet[3178]: E0120 01:42:25.505445 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.505616 kubelet[3178]: W0120 01:42:25.505510 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.505616 kubelet[3178]: E0120 01:42:25.505526 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:25.505812 kubelet[3178]: E0120 01:42:25.505801 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.505876 kubelet[3178]: W0120 01:42:25.505866 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.508653 kubelet[3178]: E0120 01:42:25.505946 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.509205 kubelet[3178]: E0120 01:42:25.509177 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.509205 kubelet[3178]: W0120 01:42:25.509196 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.509205 kubelet[3178]: E0120 01:42:25.509210 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.509373 kubelet[3178]: E0120 01:42:25.509356 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.509373 kubelet[3178]: W0120 01:42:25.509369 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.509435 kubelet[3178]: E0120 01:42:25.509379 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.509508 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.510195 kubelet[3178]: W0120 01:42:25.509519 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.509527 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.509728 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.510195 kubelet[3178]: W0120 01:42:25.509736 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.509746 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.509892 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.510195 kubelet[3178]: W0120 01:42:25.509915 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.509925 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.510195 kubelet[3178]: E0120 01:42:25.510070 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.510448 kubelet[3178]: W0120 01:42:25.510078 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.510448 kubelet[3178]: E0120 01:42:25.510088 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.510493 kubelet[3178]: E0120 01:42:25.510444 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.510493 kubelet[3178]: W0120 01:42:25.510456 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.510493 kubelet[3178]: E0120 01:42:25.510480 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.512130 containerd[1767]: time="2026-01-20T01:42:25.510813568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c9f889cb-kmwjm,Uid:a4c5f9ed-727c-4196-971f-a61a2f373caa,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:25.512383 kubelet[3178]: E0120 01:42:25.511691 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.512383 kubelet[3178]: W0120 01:42:25.511705 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.512383 kubelet[3178]: E0120 01:42:25.511718 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 20 01:42:25.550970 containerd[1767]: time="2026-01-20T01:42:25.550058443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 01:42:25.551082 containerd[1767]: time="2026-01-20T01:42:25.550986163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 01:42:25.551082 containerd[1767]: time="2026-01-20T01:42:25.551017203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:42:25.551216 containerd[1767]: time="2026-01-20T01:42:25.551161203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:42:25.573062 systemd[1]: Started cri-containerd-51e0c0e3cc6e01b846ecc215809ccfd4cb778a676f90da8db4077124f23329bc.scope - libcontainer container 51e0c0e3cc6e01b846ecc215809ccfd4cb778a676f90da8db4077124f23329bc.
Jan 20 01:42:25.589245 kubelet[3178]: E0120 01:42:25.588871 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026"
Jan 20 01:42:25.605184 kubelet[3178]: I0120 01:42:25.605133 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e68b55a2-bd34-4f7b-b8c9-be9ad16a2026-kubelet-dir\") pod \"csi-node-driver-n8z22\" (UID: \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\") " pod="calico-system/csi-node-driver-n8z22"
Jan 20 01:42:25.605469 kubelet[3178]: I0120 01:42:25.605439 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e68b55a2-bd34-4f7b-b8c9-be9ad16a2026-registration-dir\") pod \"csi-node-driver-n8z22\" (UID: \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\") " pod="calico-system/csi-node-driver-n8z22"
Jan 20 01:42:25.605879 kubelet[3178]: I0120 01:42:25.605845 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65nqh\" (UniqueName: \"kubernetes.io/projected/e68b55a2-bd34-4f7b-b8c9-be9ad16a2026-kube-api-access-65nqh\") pod \"csi-node-driver-n8z22\" (UID: \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\") " pod="calico-system/csi-node-driver-n8z22"
Jan 20 01:42:25.606245 kubelet[3178]: I0120 01:42:25.606195 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e68b55a2-bd34-4f7b-b8c9-be9ad16a2026-varrun\") pod \"csi-node-driver-n8z22\" (UID: \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\") " pod="calico-system/csi-node-driver-n8z22"
Jan 20 01:42:25.606589 kubelet[3178]: I0120 01:42:25.606555 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e68b55a2-bd34-4f7b-b8c9-be9ad16a2026-socket-dir\") pod \"csi-node-driver-n8z22\" (UID: \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\") " pod="calico-system/csi-node-driver-n8z22"
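The reconciler entries above show kubelet attaching the csi-node-driver-n8z22 pod's volumes: four hostPath mounts (kubelet-dir, registration-dir, varrun, socket-dir) plus a projected service-account token (kube-api-access-65nqh). A rough sketch of how such hostPath volumes are declared with the k8s.io/api types; only the volume names come from the log, and every host path shown is a typical guess for a Calico CSI node driver, not a value taken from this system:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1" // requires the k8s.io/api module
)

// hostPathVol builds one hostPath volume of the kind the reconciler is
// verifying in the log entries above.
func hostPathVol(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		// Volume names from the log; every path below is an assumption.
		hostPathVol("varrun", "/var/run"),
		hostPathVol("kubelet-dir", "/var/lib/kubelet"),
		hostPathVol("socket-dir", "/var/lib/kubelet/plugins/csi.tigera.io"),
		hostPathVol("registration-dir", "/var/lib/kubelet/plugins_registry"),
	}
	for _, v := range vols {
		fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```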
Jan 20 01:42:25.628410 containerd[1767]: time="2026-01-20T01:42:25.628378194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66c9f889cb-kmwjm,Uid:a4c5f9ed-727c-4196-971f-a61a2f373caa,Namespace:calico-system,Attempt:0,} returns sandbox id \"51e0c0e3cc6e01b846ecc215809ccfd4cb778a676f90da8db4077124f23329bc\""
Jan 20 01:42:25.630258 containerd[1767]: time="2026-01-20T01:42:25.630172234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 20 01:42:25.680114 containerd[1767]: time="2026-01-20T01:42:25.680082588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjl8v,Uid:32fedd4c-3c06-436d-8c97-c6227f359b5f,Namespace:calico-system,Attempt:0,}"
Error: unexpected end of JSON input" Jan 20 01:42:25.728702 kubelet[3178]: E0120 01:42:25.728678 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.728702 kubelet[3178]: W0120 01:42:25.728696 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.728998 kubelet[3178]: E0120 01:42:25.728709 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.735754 containerd[1767]: time="2026-01-20T01:42:25.735500902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:25.735754 containerd[1767]: time="2026-01-20T01:42:25.735547062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:25.735754 containerd[1767]: time="2026-01-20T01:42:25.735568062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:25.735754 containerd[1767]: time="2026-01-20T01:42:25.735649382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:25.755207 systemd[1]: Started cri-containerd-a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9.scope - libcontainer container a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9. Jan 20 01:42:25.755698 kubelet[3178]: E0120 01:42:25.755206 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:25.755698 kubelet[3178]: W0120 01:42:25.755220 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:25.755698 kubelet[3178]: E0120 01:42:25.755235 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:25.785744 containerd[1767]: time="2026-01-20T01:42:25.785419336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cjl8v,Uid:32fedd4c-3c06-436d-8c97-c6227f359b5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\"" Jan 20 01:42:26.752445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160977705.mount: Deactivated successfully. 
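The repeated driver-call.go/plugins.go records above all describe one condition: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec for FlexVolume drivers, finds a nodeagent~uds directory whose uds executable is not installed yet, and gets empty output where it expects a JSON status object, hence "unexpected end of JSON input". Calico normally installs that binary via the flexvol-driver init container started further down in this log. Below is a minimal sketch of the init handshake a FlexVolume driver is expected to implement, following the documented FlexVolume convention; it is illustrative only, not Calico's actual uds source.

package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON object the kubelet's driver-call.go
// unmarshals from a FlexVolume driver's stdout.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Printing nothing at all here is exactly what produces
	// "unexpected end of JSON input" in the log above.
	if len(os.Args) > 1 && os.Args[1] == "init" {
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Any call the driver does not implement should still answer in JSON.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}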
Jan 20 01:42:27.403252 containerd[1767]: time="2026-01-20T01:42:27.403211363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:27.405068 containerd[1767]: time="2026-01-20T01:42:27.404870403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 20 01:42:27.407329 containerd[1767]: time="2026-01-20T01:42:27.407305323Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:27.411216 containerd[1767]: time="2026-01-20T01:42:27.410999443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:27.411641 containerd[1767]: time="2026-01-20T01:42:27.411614523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.781406409s" Jan 20 01:42:27.411697 containerd[1767]: time="2026-01-20T01:42:27.411641243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 01:42:27.412753 containerd[1767]: time="2026-01-20T01:42:27.412726003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 01:42:27.426764 containerd[1767]: time="2026-01-20T01:42:27.426735163Z" level=info msg="CreateContainer within sandbox \"51e0c0e3cc6e01b846ecc215809ccfd4cb778a676f90da8db4077124f23329bc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 01:42:27.453769 containerd[1767]: time="2026-01-20T01:42:27.453724242Z" level=info msg="CreateContainer within sandbox \"51e0c0e3cc6e01b846ecc215809ccfd4cb778a676f90da8db4077124f23329bc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"48a40efe94feddace2e932c0baa48be3e66274ca3223ab8d7d17d36bea3c69e8\"" Jan 20 01:42:27.454196 containerd[1767]: time="2026-01-20T01:42:27.454168722Z" level=info msg="StartContainer for \"48a40efe94feddace2e932c0baa48be3e66274ca3223ab8d7d17d36bea3c69e8\"" Jan 20 01:42:27.484319 systemd[1]: Started cri-containerd-48a40efe94feddace2e932c0baa48be3e66274ca3223ab8d7d17d36bea3c69e8.scope - libcontainer container 48a40efe94feddace2e932c0baa48be3e66274ca3223ab8d7d17d36bea3c69e8. 
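The ImageCreate/Pulled records above are containerd's CRI image service fetching calico/typha and reporting the repo tag, digest, byte count, and wall time. For context, a rough equivalent of that pull using the containerd Go client is sketched below; the socket path and the k8s.io namespace match the defaults on a node like this, but the snippet is an illustration, not what the kubelet itself runs (the kubelet goes through the CRI gRPC API instead).

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack, mirroring the PullImage/ImageCreate events above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}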
Jan 20 01:42:27.528747 containerd[1767]: time="2026-01-20T01:42:27.528641882Z" level=info msg="StartContainer for \"48a40efe94feddace2e932c0baa48be3e66274ca3223ab8d7d17d36bea3c69e8\" returns successfully" Jan 20 01:42:27.568695 kubelet[3178]: E0120 01:42:27.568653 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:27.720077 kubelet[3178]: E0120 01:42:27.719848 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:27.720077 kubelet[3178]: W0120 01:42:27.719870 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:27.720077 kubelet[3178]: E0120 01:42:27.719993 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:28.635958 containerd[1767]: time="2026-01-20T01:42:28.635912837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:28.637805 containerd[1767]: time="2026-01-20T01:42:28.637496237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 20 01:42:28.641680 kubelet[3178]: I0120 01:42:28.640704 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:42:28.641949 containerd[1767]: time="2026-01-20T01:42:28.640737517Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:28.644072 containerd[1767]: time="2026-01-20T01:42:28.644030317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:28.644592 containerd[1767]: time="2026-01-20T01:42:28.644559477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.231805794s" Jan 20 01:42:28.644649 containerd[1767]: time="2026-01-20T01:42:28.644591717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 01:42:28.648033 containerd[1767]: time="2026-01-20T01:42:28.648002837Z" level=info msg="CreateContainer within sandbox \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:42:28.675638 containerd[1767]: time="2026-01-20T01:42:28.675541637Z" level=info msg="CreateContainer within sandbox \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4\"" Jan 20 01:42:28.676112 containerd[1767]: time="2026-01-20T01:42:28.676078957Z" level=info msg="StartContainer for \"9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4\"" Jan 20 01:42:28.707028 systemd[1]: Started cri-containerd-9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4.scope - libcontainer container 9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4. Jan 20 01:42:28.734030 kubelet[3178]: E0120 01:42:28.734005 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:28.735203 kubelet[3178]: W0120 01:42:28.734110 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:28.735203 kubelet[3178]: E0120 01:42:28.734132 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:28.736938 containerd[1767]: time="2026-01-20T01:42:28.735597677Z" level=info msg="StartContainer for \"9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4\" returns successfully" Jan 20 01:42:28.747882 kubelet[3178]: E0120 01:42:28.747865 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:28.747882 kubelet[3178]: W0120 01:42:28.747880 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:28.748071 kubelet[3178]: E0120 01:42:28.747895 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:42:28.748322 kubelet[3178]: E0120 01:42:28.748303 3178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:42:28.748322 kubelet[3178]: W0120 01:42:28.748319 3178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:42:28.748404 kubelet[3178]: E0120 01:42:28.748334 3178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:42:28.750754 systemd[1]: cri-containerd-9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4.scope: Deactivated successfully. Jan 20 01:42:28.770831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4-rootfs.mount: Deactivated successfully. Jan 20 01:42:29.568946 kubelet[3178]: E0120 01:42:29.568659 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:29.663084 kubelet[3178]: I0120 01:42:29.663032 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66c9f889cb-kmwjm" podStartSLOduration=2.880081264 podStartE2EDuration="4.663013873s" podCreationTimestamp="2026-01-20 01:42:25 +0000 UTC" firstStartedPulling="2026-01-20 01:42:25.629537874 +0000 UTC m=+25.163487123" lastFinishedPulling="2026-01-20 01:42:27.412470483 +0000 UTC m=+26.946419732" observedRunningTime="2026-01-20 01:42:27.664589001 +0000 UTC m=+27.198538250" watchObservedRunningTime="2026-01-20 01:42:29.663013873 +0000 UTC m=+29.196963122" Jan 20 01:42:29.782633 containerd[1767]: time="2026-01-20T01:42:29.782566992Z" level=info msg="shim disconnected" id=9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4 namespace=k8s.io Jan 20 01:42:29.782633 containerd[1767]: time="2026-01-20T01:42:29.782627192Z" level=warning msg="cleaning up after shim disconnected" id=9c937ee24f50bec05c670213817a0dcc9a1bf8057593811bcb5909c1f9eef3e4 namespace=k8s.io Jan 20 01:42:29.782633 containerd[1767]: time="2026-01-20T01:42:29.782636192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:42:30.657620 containerd[1767]: time="2026-01-20T01:42:30.657009228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:42:31.568927 kubelet[3178]: E0120 01:42:31.568846 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:32.723649 containerd[1767]: time="2026-01-20T01:42:32.723584619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:32.725374 containerd[1767]: time="2026-01-20T01:42:32.725345779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 20 
01:42:32.729044 containerd[1767]: time="2026-01-20T01:42:32.728743099Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:32.732480 containerd[1767]: time="2026-01-20T01:42:32.732446979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:32.733303 containerd[1767]: time="2026-01-20T01:42:32.733273699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.076226271s" Jan 20 01:42:32.733363 containerd[1767]: time="2026-01-20T01:42:32.733304139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 01:42:32.738832 containerd[1767]: time="2026-01-20T01:42:32.738721299Z" level=info msg="CreateContainer within sandbox \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:42:32.773258 containerd[1767]: time="2026-01-20T01:42:32.773224579Z" level=info msg="CreateContainer within sandbox \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76\"" Jan 20 01:42:32.775029 containerd[1767]: time="2026-01-20T01:42:32.774995379Z" level=info msg="StartContainer for \"af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76\"" Jan 20 01:42:32.807044 systemd[1]: Started cri-containerd-af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76.scope - libcontainer container af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76. Jan 20 01:42:32.832190 containerd[1767]: time="2026-01-20T01:42:32.832153219Z" level=info msg="StartContainer for \"af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76\" returns successfully" Jan 20 01:42:33.568461 kubelet[3178]: E0120 01:42:33.568405 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:33.942858 containerd[1767]: time="2026-01-20T01:42:33.942612134Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:42:33.945238 systemd[1]: cri-containerd-af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76.scope: Deactivated successfully. Jan 20 01:42:33.965278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76-rootfs.mount: Deactivated successfully. 
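The "failed to reload cni configuration" error above is containerd's CNI watcher firing on a write to /etc/cni/net.d (here the calico-kubeconfig file that install-cni drops first) before any loadable network config exists; until a parseable config appears, the runtime stays NetworkReady=false, which is what keeps producing the csi-node-driver-n8z22 sync errors in this log. Below is a small stdlib-only Go sketch of the kind of check that is useful when debugging this state; the directory and file extensions follow the CNI convention, and the sketch is not containerd's actual loader.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Same directory containerd's watcher monitors in the log above.
	files, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		fmt.Println("no files at all: cni plugin not initialized")
	}
	for _, f := range files {
		switch filepath.Ext(f) {
		case ".conf", ".conflist", ".json":
			// Candidate config: check that it parses and names a network.
		default:
			// e.g. calico-kubeconfig wakes the watcher but is not a config.
			fmt.Printf("%s: ignored (not a CNI config extension)\n", f)
			continue
		}
		raw, err := os.ReadFile(f)
		if err != nil {
			log.Fatal(err)
		}
		var cfg struct {
			Name    string            `json:"name"`
			Plugins []json.RawMessage `json:"plugins"`
		}
		if err := json.Unmarshal(raw, &cfg); err != nil {
			fmt.Printf("%s: unparsable: %v\n", f, err)
			continue
		}
		fmt.Printf("%s: network %q with %d plugin(s)\n", f, cfg.Name, len(cfg.Plugins))
	}
}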
Jan 20 01:42:33.991211 kubelet[3178]: I0120 01:42:33.991181 3178 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 01:42:34.360683 kubelet[3178]: W0120 01:42:34.061469 3178 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-n-0046389dc1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-0046389dc1' and this object Jan 20 01:42:34.360683 kubelet[3178]: E0120 01:42:34.061504 3178 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-n-0046389dc1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-0046389dc1' and this object" logger="UnhandledError" Jan 20 01:42:34.360683 kubelet[3178]: W0120 01:42:34.062172 3178 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.6-n-0046389dc1" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-n-0046389dc1' and this object Jan 20 01:42:34.360683 kubelet[3178]: E0120 01:42:34.062194 3178 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.6-n-0046389dc1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-n-0046389dc1' and this object" logger="UnhandledError" Jan 20 01:42:34.042084 systemd[1]: Created slice kubepods-burstable-pod5b059e7e_b61a_45ee_b787_908d675a8c0c.slice - libcontainer container kubepods-burstable-pod5b059e7e_b61a_45ee_b787_908d675a8c0c.slice. 
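[Annotation] The two reflector warnings above are ordering noise rather than misconfiguration: kubelet runs under the node authorizer, which only lets a node read a Secret or ConfigMap once a pod referencing it is actually bound to that node, hence "no relationship found between node 'ci-4081.3.6-n-0046389dc1' and this object". Once the calico-apiserver pods are assigned to the node, the same list/watch succeeds. A hypothetical client-go sketch of the LIST being denied (this is not kubelet's actual code; it assumes in-cluster credentials):

// listrootca.go — hypothetical sketch of the reflector's initial LIST.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// kubelet scopes the list to the single object it needs.
	_, err = cs.CoreV1().ConfigMaps("calico-apiserver").List(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=kube-root-ca.crt"})
	// With node credentials and no bound pod yet, the node authorizer
	// answers with the "forbidden ... no relationship found" error above.
	fmt.Println(err)
}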
Jan 20 01:42:34.360966 kubelet[3178]: I0120 01:42:34.077141 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-backend-key-pair\") pod \"whisker-f5fc59cb9-tchkn\" (UID: \"91ca927e-e136-46b4-98a3-df100d2c639a\") " pod="calico-system/whisker-f5fc59cb9-tchkn" Jan 20 01:42:34.360966 kubelet[3178]: I0120 01:42:34.077182 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlrwp\" (UniqueName: \"kubernetes.io/projected/106a00f0-806f-4139-9f3e-5722fa42f199-kube-api-access-rlrwp\") pod \"calico-apiserver-9899c86f9-k8n9d\" (UID: \"106a00f0-806f-4139-9f3e-5722fa42f199\") " pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" Jan 20 01:42:34.360966 kubelet[3178]: I0120 01:42:34.077205 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a0da74e-5da6-4d17-baef-898cc44d92e7-config\") pod \"goldmane-666569f655-bwmf9\" (UID: \"8a0da74e-5da6-4d17-baef-898cc44d92e7\") " pod="calico-system/goldmane-666569f655-bwmf9" Jan 20 01:42:34.360966 kubelet[3178]: I0120 01:42:34.077221 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e8d57de-35ca-4ff1-828c-b0edcfa72a11-calico-apiserver-certs\") pod \"calico-apiserver-7d975bd6cf-mbc2x\" (UID: \"0e8d57de-35ca-4ff1-828c-b0edcfa72a11\") " pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" Jan 20 01:42:34.360966 kubelet[3178]: I0120 01:42:34.077246 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0e893c04-8b87-47f0-b2a8-981971a5bfc3-calico-apiserver-certs\") pod \"calico-apiserver-9899c86f9-mvhh6\" (UID: \"0e893c04-8b87-47f0-b2a8-981971a5bfc3\") " pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" Jan 20 01:42:34.051674 systemd[1]: Created slice kubepods-besteffort-podf24ea947_18f0_4003_bcc7_bb3d7376a6ba.slice - libcontainer container kubepods-besteffort-podf24ea947_18f0_4003_bcc7_bb3d7376a6ba.slice. 
Jan 20 01:42:34.361187 kubelet[3178]: I0120 01:42:34.077262 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnk6z\" (UniqueName: \"kubernetes.io/projected/5b059e7e-b61a-45ee-b787-908d675a8c0c-kube-api-access-nnk6z\") pod \"coredns-668d6bf9bc-5t5k4\" (UID: \"5b059e7e-b61a-45ee-b787-908d675a8c0c\") " pod="kube-system/coredns-668d6bf9bc-5t5k4" Jan 20 01:42:34.361187 kubelet[3178]: I0120 01:42:34.077279 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f24ea947-18f0-4003-bcc7-bb3d7376a6ba-tigera-ca-bundle\") pod \"calico-kube-controllers-cd848bd58-mqt54\" (UID: \"f24ea947-18f0-4003-bcc7-bb3d7376a6ba\") " pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" Jan 20 01:42:34.361187 kubelet[3178]: I0120 01:42:34.077295 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7lc\" (UniqueName: \"kubernetes.io/projected/f24ea947-18f0-4003-bcc7-bb3d7376a6ba-kube-api-access-7n7lc\") pod \"calico-kube-controllers-cd848bd58-mqt54\" (UID: \"f24ea947-18f0-4003-bcc7-bb3d7376a6ba\") " pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" Jan 20 01:42:34.361187 kubelet[3178]: I0120 01:42:34.077320 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-ca-bundle\") pod \"whisker-f5fc59cb9-tchkn\" (UID: \"91ca927e-e136-46b4-98a3-df100d2c639a\") " pod="calico-system/whisker-f5fc59cb9-tchkn" Jan 20 01:42:34.361187 kubelet[3178]: I0120 01:42:34.077339 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh4m7\" (UniqueName: \"kubernetes.io/projected/0e893c04-8b87-47f0-b2a8-981971a5bfc3-kube-api-access-zh4m7\") pod \"calico-apiserver-9899c86f9-mvhh6\" (UID: \"0e893c04-8b87-47f0-b2a8-981971a5bfc3\") " pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" Jan 20 01:42:34.065280 systemd[1]: Created slice kubepods-burstable-pod9033ed8a_4ce4_4c81_8671_cf1d75ad0bd7.slice - libcontainer container kubepods-burstable-pod9033ed8a_4ce4_4c81_8671_cf1d75ad0bd7.slice. 
Jan 20 01:42:34.361353 kubelet[3178]: I0120 01:42:34.077358 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b059e7e-b61a-45ee-b787-908d675a8c0c-config-volume\") pod \"coredns-668d6bf9bc-5t5k4\" (UID: \"5b059e7e-b61a-45ee-b787-908d675a8c0c\") " pod="kube-system/coredns-668d6bf9bc-5t5k4" Jan 20 01:42:34.361353 kubelet[3178]: I0120 01:42:34.077375 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8a0da74e-5da6-4d17-baef-898cc44d92e7-goldmane-key-pair\") pod \"goldmane-666569f655-bwmf9\" (UID: \"8a0da74e-5da6-4d17-baef-898cc44d92e7\") " pod="calico-system/goldmane-666569f655-bwmf9" Jan 20 01:42:34.361353 kubelet[3178]: I0120 01:42:34.077402 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a0da74e-5da6-4d17-baef-898cc44d92e7-goldmane-ca-bundle\") pod \"goldmane-666569f655-bwmf9\" (UID: \"8a0da74e-5da6-4d17-baef-898cc44d92e7\") " pod="calico-system/goldmane-666569f655-bwmf9" Jan 20 01:42:34.361353 kubelet[3178]: I0120 01:42:34.077420 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/106a00f0-806f-4139-9f3e-5722fa42f199-calico-apiserver-certs\") pod \"calico-apiserver-9899c86f9-k8n9d\" (UID: \"106a00f0-806f-4139-9f3e-5722fa42f199\") " pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" Jan 20 01:42:34.361353 kubelet[3178]: I0120 01:42:34.077437 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27gzs\" (UniqueName: \"kubernetes.io/projected/0e8d57de-35ca-4ff1-828c-b0edcfa72a11-kube-api-access-27gzs\") pod \"calico-apiserver-7d975bd6cf-mbc2x\" (UID: \"0e8d57de-35ca-4ff1-828c-b0edcfa72a11\") " pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" Jan 20 01:42:34.074685 systemd[1]: Created slice kubepods-besteffort-pod106a00f0_806f_4139_9f3e_5722fa42f199.slice - libcontainer container kubepods-besteffort-pod106a00f0_806f_4139_9f3e_5722fa42f199.slice. 
Jan 20 01:42:34.361516 kubelet[3178]: I0120 01:42:34.077456 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrcnq\" (UniqueName: \"kubernetes.io/projected/9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7-kube-api-access-xrcnq\") pod \"coredns-668d6bf9bc-jj6z8\" (UID: \"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7\") " pod="kube-system/coredns-668d6bf9bc-jj6z8" Jan 20 01:42:34.361516 kubelet[3178]: I0120 01:42:34.077482 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qdf\" (UniqueName: \"kubernetes.io/projected/91ca927e-e136-46b4-98a3-df100d2c639a-kube-api-access-j2qdf\") pod \"whisker-f5fc59cb9-tchkn\" (UID: \"91ca927e-e136-46b4-98a3-df100d2c639a\") " pod="calico-system/whisker-f5fc59cb9-tchkn" Jan 20 01:42:34.361516 kubelet[3178]: I0120 01:42:34.077503 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nmdq\" (UniqueName: \"kubernetes.io/projected/8a0da74e-5da6-4d17-baef-898cc44d92e7-kube-api-access-7nmdq\") pod \"goldmane-666569f655-bwmf9\" (UID: \"8a0da74e-5da6-4d17-baef-898cc44d92e7\") " pod="calico-system/goldmane-666569f655-bwmf9" Jan 20 01:42:34.361516 kubelet[3178]: I0120 01:42:34.077521 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7-config-volume\") pod \"coredns-668d6bf9bc-jj6z8\" (UID: \"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7\") " pod="kube-system/coredns-668d6bf9bc-jj6z8" Jan 20 01:42:34.079465 systemd[1]: Created slice kubepods-besteffort-pod0e893c04_8b87_47f0_b2a8_981971a5bfc3.slice - libcontainer container kubepods-besteffort-pod0e893c04_8b87_47f0_b2a8_981971a5bfc3.slice. Jan 20 01:42:34.086203 systemd[1]: Created slice kubepods-besteffort-pod91ca927e_e136_46b4_98a3_df100d2c639a.slice - libcontainer container kubepods-besteffort-pod91ca927e_e136_46b4_98a3_df100d2c639a.slice. Jan 20 01:42:34.094386 systemd[1]: Created slice kubepods-besteffort-pod0e8d57de_35ca_4ff1_828c_b0edcfa72a11.slice - libcontainer container kubepods-besteffort-pod0e8d57de_35ca_4ff1_828c_b0edcfa72a11.slice. Jan 20 01:42:34.104566 systemd[1]: Created slice kubepods-besteffort-pod8a0da74e_5da6_4d17_baef_898cc44d92e7.slice - libcontainer container kubepods-besteffort-pod8a0da74e_5da6_4d17_baef_898cc44d92e7.slice. 
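[Annotation] The burst of VerifyControllerAttachedVolume entries above is kubelet's volume reconciler registering every volume of the pods just scheduled here (node Ready at 01:42:33.991) before attempting any mounts. Each UniqueName encodes the volume plugin and the backing object. A hypothetical sketch of what two of these entries correspond to in a pod spec (names copied from the log, everything else illustrative):

// volumes.go — hypothetical sketch of pod-spec volumes behind the entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// "kubernetes.io/secret/...-whisker-backend-key-pair" is a plain Secret volume:
	secretVol := corev1.Volume{
		Name: "whisker-backend-key-pair",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "whisker-backend-key-pair"},
		},
	}
	// "kubernetes.io/projected/...-kube-api-access-rlrwp" is the projected
	// service-account volume kubelet injects into every pod (token +
	// kube-root-ca.crt + namespace) — which is why the kube-root-ca.crt
	// reflector above has to be able to read that ConfigMap.
	fmt.Println(secretVol.Name)
}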
Jan 20 01:42:34.661866 containerd[1767]: time="2026-01-20T01:42:34.661759195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5k4,Uid:5b059e7e-b61a-45ee-b787-908d675a8c0c,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:34.675396 containerd[1767]: time="2026-01-20T01:42:34.675364752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5fc59cb9-tchkn,Uid:91ca927e-e136-46b4-98a3-df100d2c639a,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:34.681509 containerd[1767]: time="2026-01-20T01:42:34.681336190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bwmf9,Uid:8a0da74e-5da6-4d17-baef-898cc44d92e7,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:34.681785 containerd[1767]: time="2026-01-20T01:42:34.681761150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd848bd58-mqt54,Uid:f24ea947-18f0-4003-bcc7-bb3d7376a6ba,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:34.681946 containerd[1767]: time="2026-01-20T01:42:34.681825110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj6z8,Uid:9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:34.827373 containerd[1767]: time="2026-01-20T01:42:34.827319077Z" level=info msg="shim disconnected" id=af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76 namespace=k8s.io Jan 20 01:42:34.827373 containerd[1767]: time="2026-01-20T01:42:34.827368437Z" level=warning msg="cleaning up after shim disconnected" id=af0cb5fffa281818b55707c12dba0ff6d4ee108d554107ad027561a916610a76 namespace=k8s.io Jan 20 01:42:34.827373 containerd[1767]: time="2026-01-20T01:42:34.827379157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:42:35.113884 containerd[1767]: time="2026-01-20T01:42:35.113787332Z" level=error msg="Failed to destroy network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.115461 containerd[1767]: time="2026-01-20T01:42:35.115204772Z" level=error msg="Failed to destroy network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.115576 containerd[1767]: time="2026-01-20T01:42:35.115552491Z" level=error msg="encountered an error cleaning up failed sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.115693 containerd[1767]: time="2026-01-20T01:42:35.115665851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj6z8,Uid:9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
01:42:35.117332 kubelet[3178]: E0120 01:42:35.116237 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.117332 kubelet[3178]: E0120 01:42:35.116310 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jj6z8" Jan 20 01:42:35.117332 kubelet[3178]: E0120 01:42:35.116330 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jj6z8" Jan 20 01:42:35.117652 kubelet[3178]: E0120 01:42:35.116370 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jj6z8_kube-system(9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jj6z8_kube-system(9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jj6z8" podUID="9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7" Jan 20 01:42:35.118648 containerd[1767]: time="2026-01-20T01:42:35.118531011Z" level=error msg="encountered an error cleaning up failed sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.118781 containerd[1767]: time="2026-01-20T01:42:35.118759251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5k4,Uid:5b059e7e-b61a-45ee-b787-908d675a8c0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.119052 kubelet[3178]: E0120 01:42:35.119022 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.119257 kubelet[3178]: E0120 01:42:35.119151 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5t5k4" Jan 20 01:42:35.119257 kubelet[3178]: E0120 01:42:35.119171 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5t5k4" Jan 20 01:42:35.119257 kubelet[3178]: E0120 01:42:35.119208 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5t5k4_kube-system(5b059e7e-b61a-45ee-b787-908d675a8c0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5t5k4_kube-system(5b059e7e-b61a-45ee-b787-908d675a8c0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5t5k4" podUID="5b059e7e-b61a-45ee-b787-908d675a8c0c" Jan 20 01:42:35.125853 containerd[1767]: time="2026-01-20T01:42:35.125642769Z" level=error msg="Failed to destroy network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.126151 containerd[1767]: time="2026-01-20T01:42:35.125785649Z" level=error msg="Failed to destroy network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.126724 containerd[1767]: time="2026-01-20T01:42:35.126615409Z" level=error msg="encountered an error cleaning up failed sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.126724 containerd[1767]: time="2026-01-20T01:42:35.126657649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f5fc59cb9-tchkn,Uid:91ca927e-e136-46b4-98a3-df100d2c639a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.127497 kubelet[3178]: E0120 01:42:35.126876 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.127497 kubelet[3178]: E0120 01:42:35.127030 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f5fc59cb9-tchkn" Jan 20 01:42:35.127497 kubelet[3178]: E0120 01:42:35.127051 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-f5fc59cb9-tchkn" Jan 20 01:42:35.127673 kubelet[3178]: E0120 01:42:35.127085 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-f5fc59cb9-tchkn_calico-system(91ca927e-e136-46b4-98a3-df100d2c639a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-f5fc59cb9-tchkn_calico-system(91ca927e-e136-46b4-98a3-df100d2c639a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f5fc59cb9-tchkn" podUID="91ca927e-e136-46b4-98a3-df100d2c639a" Jan 20 01:42:35.129347 containerd[1767]: time="2026-01-20T01:42:35.129099008Z" level=error msg="encountered an error cleaning up failed sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.129715 containerd[1767]: time="2026-01-20T01:42:35.129610968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd848bd58-mqt54,Uid:f24ea947-18f0-4003-bcc7-bb3d7376a6ba,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.130018 kubelet[3178]: E0120 01:42:35.129893 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.130018 kubelet[3178]: E0120 01:42:35.129979 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" Jan 20 01:42:35.130018 kubelet[3178]: E0120 01:42:35.129995 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" Jan 20 01:42:35.130334 containerd[1767]: time="2026-01-20T01:42:35.130172728Z" level=error msg="Failed to destroy network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.130389 kubelet[3178]: E0120 01:42:35.130271 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd848bd58-mqt54_calico-system(f24ea947-18f0-4003-bcc7-bb3d7376a6ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd848bd58-mqt54_calico-system(f24ea947-18f0-4003-bcc7-bb3d7376a6ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:42:35.130859 containerd[1767]: time="2026-01-20T01:42:35.130594128Z" level=error msg="encountered an error cleaning up failed sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.130859 containerd[1767]: time="2026-01-20T01:42:35.130633408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bwmf9,Uid:8a0da74e-5da6-4d17-baef-898cc44d92e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.130983 kubelet[3178]: E0120 01:42:35.130754 
3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.130983 kubelet[3178]: E0120 01:42:35.130782 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bwmf9" Jan 20 01:42:35.130983 kubelet[3178]: E0120 01:42:35.130799 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bwmf9" Jan 20 01:42:35.131061 kubelet[3178]: E0120 01:42:35.130822 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bwmf9_calico-system(8a0da74e-5da6-4d17-baef-898cc44d92e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bwmf9_calico-system(8a0da74e-5da6-4d17-baef-898cc44d92e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:42:35.179752 kubelet[3178]: E0120 01:42:35.179236 3178 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 01:42:35.179752 kubelet[3178]: E0120 01:42:35.179309 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/106a00f0-806f-4139-9f3e-5722fa42f199-calico-apiserver-certs podName:106a00f0-806f-4139-9f3e-5722fa42f199 nodeName:}" failed. No retries permitted until 2026-01-20 01:42:35.679288517 +0000 UTC m=+35.213237766 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/106a00f0-806f-4139-9f3e-5722fa42f199-calico-apiserver-certs") pod "calico-apiserver-9899c86f9-k8n9d" (UID: "106a00f0-806f-4139-9f3e-5722fa42f199") : failed to sync secret cache: timed out waiting for the condition Jan 20 01:42:35.179752 kubelet[3178]: E0120 01:42:35.179595 3178 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 01:42:35.179752 kubelet[3178]: E0120 01:42:35.179671 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e8d57de-35ca-4ff1-828c-b0edcfa72a11-calico-apiserver-certs podName:0e8d57de-35ca-4ff1-828c-b0edcfa72a11 nodeName:}" failed. No retries permitted until 2026-01-20 01:42:35.679657997 +0000 UTC m=+35.213607246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0e8d57de-35ca-4ff1-828c-b0edcfa72a11-calico-apiserver-certs") pod "calico-apiserver-7d975bd6cf-mbc2x" (UID: "0e8d57de-35ca-4ff1-828c-b0edcfa72a11") : failed to sync secret cache: timed out waiting for the condition Jan 20 01:42:35.179752 kubelet[3178]: E0120 01:42:35.179690 3178 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 01:42:35.180003 kubelet[3178]: E0120 01:42:35.179712 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e893c04-8b87-47f0-b2a8-981971a5bfc3-calico-apiserver-certs podName:0e893c04-8b87-47f0-b2a8-981971a5bfc3 nodeName:}" failed. No retries permitted until 2026-01-20 01:42:35.679703477 +0000 UTC m=+35.213652726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0e893c04-8b87-47f0-b2a8-981971a5bfc3-calico-apiserver-certs") pod "calico-apiserver-9899c86f9-mvhh6" (UID: "0e893c04-8b87-47f0-b2a8-981971a5bfc3") : failed to sync secret cache: timed out waiting for the condition Jan 20 01:42:35.190119 kubelet[3178]: E0120 01:42:35.190096 3178 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.190119 kubelet[3178]: E0120 01:42:35.190117 3178 projected.go:194] Error preparing data for projected volume kube-api-access-rlrwp for pod calico-apiserver/calico-apiserver-9899c86f9-k8n9d: failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.190226 kubelet[3178]: E0120 01:42:35.190153 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/106a00f0-806f-4139-9f3e-5722fa42f199-kube-api-access-rlrwp podName:106a00f0-806f-4139-9f3e-5722fa42f199 nodeName:}" failed. No retries permitted until 2026-01-20 01:42:35.690142354 +0000 UTC m=+35.224091603 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rlrwp" (UniqueName: "kubernetes.io/projected/106a00f0-806f-4139-9f3e-5722fa42f199-kube-api-access-rlrwp") pod "calico-apiserver-9899c86f9-k8n9d" (UID: "106a00f0-806f-4139-9f3e-5722fa42f199") : failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.207886 kubelet[3178]: E0120 01:42:35.207862 3178 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.207886 kubelet[3178]: E0120 01:42:35.207887 3178 projected.go:194] Error preparing data for projected volume kube-api-access-27gzs for pod calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x: failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.208006 kubelet[3178]: E0120 01:42:35.207930 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e8d57de-35ca-4ff1-828c-b0edcfa72a11-kube-api-access-27gzs podName:0e8d57de-35ca-4ff1-828c-b0edcfa72a11 nodeName:}" failed. No retries permitted until 2026-01-20 01:42:35.70791739 +0000 UTC m=+35.241866639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-27gzs" (UniqueName: "kubernetes.io/projected/0e8d57de-35ca-4ff1-828c-b0edcfa72a11-kube-api-access-27gzs") pod "calico-apiserver-7d975bd6cf-mbc2x" (UID: "0e8d57de-35ca-4ff1-828c-b0edcfa72a11") : failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.213654 kubelet[3178]: E0120 01:42:35.213628 3178 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.213654 kubelet[3178]: E0120 01:42:35.213652 3178 projected.go:194] Error preparing data for projected volume kube-api-access-zh4m7 for pod calico-apiserver/calico-apiserver-9899c86f9-mvhh6: failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.213772 kubelet[3178]: E0120 01:42:35.213688 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e893c04-8b87-47f0-b2a8-981971a5bfc3-kube-api-access-zh4m7 podName:0e893c04-8b87-47f0-b2a8-981971a5bfc3 nodeName:}" failed. No retries permitted until 2026-01-20 01:42:35.713676989 +0000 UTC m=+35.247626238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zh4m7" (UniqueName: "kubernetes.io/projected/0e893c04-8b87-47f0-b2a8-981971a5bfc3-kube-api-access-zh4m7") pod "calico-apiserver-9899c86f9-mvhh6" (UID: "0e893c04-8b87-47f0-b2a8-981971a5bfc3") : failed to sync configmap cache: timed out waiting for the condition Jan 20 01:42:35.578808 systemd[1]: Created slice kubepods-besteffort-pode68b55a2_bd34_4f7b_b8c9_be9ad16a2026.slice - libcontainer container kubepods-besteffort-pode68b55a2_bd34_4f7b_b8c9_be9ad16a2026.slice. 
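[Annotation] Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is running, and the calico/node image is still being pulled at this point. The interleaved MountVolume.SetUp failures are a separate, equally transient issue — the secret/configmap caches have not synced yet (the same node-authorizer ordering as above), and kubelet schedules retries with durationBeforeRetry 500ms. A hypothetical sketch of the check behind the repeated stat error (the path and message are from the log; the real logic lives in the Calico CNI plugin):

// nodename.go — hypothetical sketch of the readiness check failing above.
package main

import (
	"fmt"
	"os"
)

func nodename() (string, error) {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		// calico-node writes this file at startup; until it is Running and
		// has /var/lib/calico mounted, every CNI ADD/DEL on the node fails.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	b, err := os.ReadFile("/var/lib/calico/nodename")
	return string(b), err
}

func main() {
	if _, err := nodename(); err != nil {
		fmt.Println(err)
	}
}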
Jan 20 01:42:35.580992 containerd[1767]: time="2026-01-20T01:42:35.580955905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8z22,Uid:e68b55a2-bd34-4f7b-b8c9-be9ad16a2026,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:35.640285 containerd[1767]: time="2026-01-20T01:42:35.640230132Z" level=error msg="Failed to destroy network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.640562 containerd[1767]: time="2026-01-20T01:42:35.640537452Z" level=error msg="encountered an error cleaning up failed sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.640628 containerd[1767]: time="2026-01-20T01:42:35.640595972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8z22,Uid:e68b55a2-bd34-4f7b-b8c9-be9ad16a2026,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.641035 kubelet[3178]: E0120 01:42:35.640821 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.641035 kubelet[3178]: E0120 01:42:35.640890 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8z22" Jan 20 01:42:35.641035 kubelet[3178]: E0120 01:42:35.640930 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n8z22" Jan 20 01:42:35.641171 kubelet[3178]: E0120 01:42:35.640975 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:35.664672 containerd[1767]: time="2026-01-20T01:42:35.664640806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:42:35.665384 kubelet[3178]: I0120 01:42:35.665355 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:42:35.666564 containerd[1767]: time="2026-01-20T01:42:35.666160566Z" level=info msg="StopPodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\"" Jan 20 01:42:35.666564 containerd[1767]: time="2026-01-20T01:42:35.666335206Z" level=info msg="Ensure that sandbox 216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b in task-service has been cleanup successfully" Jan 20 01:42:35.669475 kubelet[3178]: I0120 01:42:35.669453 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:42:35.669872 containerd[1767]: time="2026-01-20T01:42:35.669843805Z" level=info msg="StopPodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\"" Jan 20 01:42:35.670015 containerd[1767]: time="2026-01-20T01:42:35.669996485Z" level=info msg="Ensure that sandbox ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0 in task-service has been cleanup successfully" Jan 20 01:42:35.676922 kubelet[3178]: I0120 01:42:35.674592 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:42:35.678456 containerd[1767]: time="2026-01-20T01:42:35.678115083Z" level=info msg="StopPodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\"" Jan 20 01:42:35.679819 containerd[1767]: time="2026-01-20T01:42:35.679028723Z" level=info msg="Ensure that sandbox c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f in task-service has been cleanup successfully" Jan 20 01:42:35.680011 kubelet[3178]: I0120 01:42:35.679971 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:42:35.680818 containerd[1767]: time="2026-01-20T01:42:35.680385403Z" level=info msg="StopPodSandbox for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\"" Jan 20 01:42:35.683756 containerd[1767]: time="2026-01-20T01:42:35.683529002Z" level=info msg="Ensure that sandbox 35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809 in task-service has been cleanup successfully" Jan 20 01:42:35.685806 kubelet[3178]: I0120 01:42:35.685631 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:42:35.686249 containerd[1767]: time="2026-01-20T01:42:35.686223521Z" level=info msg="StopPodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\"" Jan 20 01:42:35.686614 containerd[1767]: time="2026-01-20T01:42:35.686590561Z" level=info msg="Ensure that sandbox 
afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504 in task-service has been cleanup successfully" Jan 20 01:42:35.687764 kubelet[3178]: I0120 01:42:35.687740 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:42:35.708878 containerd[1767]: time="2026-01-20T01:42:35.708743036Z" level=info msg="StopPodSandbox for \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\"" Jan 20 01:42:35.710765 containerd[1767]: time="2026-01-20T01:42:35.710543076Z" level=info msg="Ensure that sandbox 8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7 in task-service has been cleanup successfully" Jan 20 01:42:35.754731 containerd[1767]: time="2026-01-20T01:42:35.754681466Z" level=error msg="StopPodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" failed" error="failed to destroy network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.754945 kubelet[3178]: E0120 01:42:35.754909 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:42:35.755735 kubelet[3178]: E0120 01:42:35.754972 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0"} Jan 20 01:42:35.755735 kubelet[3178]: E0120 01:42:35.755031 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91ca927e-e136-46b4-98a3-df100d2c639a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:35.755735 kubelet[3178]: E0120 01:42:35.755052 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91ca927e-e136-46b4-98a3-df100d2c639a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-f5fc59cb9-tchkn" podUID="91ca927e-e136-46b4-98a3-df100d2c639a" Jan 20 01:42:35.764031 containerd[1767]: time="2026-01-20T01:42:35.763989064Z" level=error msg="StopPodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" failed" error="failed to destroy network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.764779 containerd[1767]: time="2026-01-20T01:42:35.764454944Z" level=error msg="StopPodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" failed" error="failed to destroy network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.764941 kubelet[3178]: E0120 01:42:35.764870 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:42:35.764941 kubelet[3178]: E0120 01:42:35.764931 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504"} Jan 20 01:42:35.765077 kubelet[3178]: E0120 01:42:35.764962 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a0da74e-5da6-4d17-baef-898cc44d92e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:35.765077 kubelet[3178]: E0120 01:42:35.764983 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a0da74e-5da6-4d17-baef-898cc44d92e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:42:35.765309 kubelet[3178]: E0120 01:42:35.765266 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:42:35.765309 kubelet[3178]: E0120 01:42:35.765294 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b"} Jan 20 01:42:35.765393 kubelet[3178]: E0120 01:42:35.765343 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:35.765393 kubelet[3178]: E0120 01:42:35.765362 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jj6z8" podUID="9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7" Jan 20 01:42:35.771705 containerd[1767]: time="2026-01-20T01:42:35.771667822Z" level=error msg="StopPodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" failed" error="failed to destroy network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.772924 kubelet[3178]: E0120 01:42:35.772183 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:42:35.772924 kubelet[3178]: E0120 01:42:35.772710 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f"} Jan 20 01:42:35.772924 kubelet[3178]: E0120 01:42:35.772745 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:35.772924 kubelet[3178]: E0120 01:42:35.772765 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:35.778078 containerd[1767]: time="2026-01-20T01:42:35.778013101Z" level=error msg="StopPodSandbox 
for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" failed" error="failed to destroy network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.778234 kubelet[3178]: E0120 01:42:35.778147 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:42:35.778234 kubelet[3178]: E0120 01:42:35.778180 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809"} Jan 20 01:42:35.778234 kubelet[3178]: E0120 01:42:35.778202 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f24ea947-18f0-4003-bcc7-bb3d7376a6ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:35.778234 kubelet[3178]: E0120 01:42:35.778220 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f24ea947-18f0-4003-bcc7-bb3d7376a6ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:42:35.778452 containerd[1767]: time="2026-01-20T01:42:35.778342180Z" level=error msg="StopPodSandbox for \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" failed" error="failed to destroy network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.778486 kubelet[3178]: E0120 01:42:35.778459 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:42:35.778514 kubelet[3178]: E0120 01:42:35.778484 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7"} Jan 20 01:42:35.778514 kubelet[3178]: E0120 01:42:35.778505 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b059e7e-b61a-45ee-b787-908d675a8c0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:35.778599 kubelet[3178]: E0120 01:42:35.778524 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b059e7e-b61a-45ee-b787-908d675a8c0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5t5k4" podUID="5b059e7e-b61a-45ee-b787-908d675a8c0c" Jan 20 01:42:35.870449 containerd[1767]: time="2026-01-20T01:42:35.870341560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d975bd6cf-mbc2x,Uid:0e8d57de-35ca-4ff1-828c-b0edcfa72a11,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:42:35.875557 containerd[1767]: time="2026-01-20T01:42:35.875325238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-mvhh6,Uid:0e893c04-8b87-47f0-b2a8-981971a5bfc3,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:42:35.882223 containerd[1767]: time="2026-01-20T01:42:35.882195037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-k8n9d,Uid:106a00f0-806f-4139-9f3e-5722fa42f199,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:42:35.969080 containerd[1767]: time="2026-01-20T01:42:35.968964977Z" level=error msg="Failed to destroy network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.969940 containerd[1767]: time="2026-01-20T01:42:35.969426937Z" level=error msg="encountered an error cleaning up failed sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.970092 containerd[1767]: time="2026-01-20T01:42:35.969482177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d975bd6cf-mbc2x,Uid:0e8d57de-35ca-4ff1-828c-b0edcfa72a11,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.972014 kubelet[3178]: E0120 01:42:35.970776 3178 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:35.972014 kubelet[3178]: E0120 01:42:35.970837 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" Jan 20 01:42:35.972014 kubelet[3178]: E0120 01:42:35.970855 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" Jan 20 01:42:35.972160 kubelet[3178]: E0120 01:42:35.970891 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d975bd6cf-mbc2x_calico-apiserver(0e8d57de-35ca-4ff1-828c-b0edcfa72a11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d975bd6cf-mbc2x_calico-apiserver(0e8d57de-35ca-4ff1-828c-b0edcfa72a11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:42:35.977374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809-shm.mount: Deactivated successfully. Jan 20 01:42:35.977595 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b-shm.mount: Deactivated successfully. Jan 20 01:42:35.977738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504-shm.mount: Deactivated successfully. Jan 20 01:42:35.977789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7-shm.mount: Deactivated successfully. Jan 20 01:42:35.977839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0-shm.mount: Deactivated successfully. Jan 20 01:42:35.982528 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421-shm.mount: Deactivated successfully. 
Jan 20 01:42:36.017736 containerd[1767]: time="2026-01-20T01:42:36.017638806Z" level=error msg="Failed to destroy network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.019822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a-shm.mount: Deactivated successfully. Jan 20 01:42:36.021298 containerd[1767]: time="2026-01-20T01:42:36.021156125Z" level=error msg="encountered an error cleaning up failed sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.021298 containerd[1767]: time="2026-01-20T01:42:36.021216005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-mvhh6,Uid:0e893c04-8b87-47f0-b2a8-981971a5bfc3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.022913 kubelet[3178]: E0120 01:42:36.021682 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.022913 kubelet[3178]: E0120 01:42:36.021743 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" Jan 20 01:42:36.022913 kubelet[3178]: E0120 01:42:36.021761 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" Jan 20 01:42:36.023056 kubelet[3178]: E0120 01:42:36.021804 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9899c86f9-mvhh6_calico-apiserver(0e893c04-8b87-47f0-b2a8-981971a5bfc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9899c86f9-mvhh6_calico-apiserver(0e893c04-8b87-47f0-b2a8-981971a5bfc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:42:36.025008 containerd[1767]: time="2026-01-20T01:42:36.024976284Z" level=error msg="Failed to destroy network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.025447 containerd[1767]: time="2026-01-20T01:42:36.025419804Z" level=error msg="encountered an error cleaning up failed sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.025596 containerd[1767]: time="2026-01-20T01:42:36.025548804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-k8n9d,Uid:106a00f0-806f-4139-9f3e-5722fa42f199,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.026498 kubelet[3178]: E0120 01:42:36.026306 3178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.026498 kubelet[3178]: E0120 01:42:36.026344 3178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" Jan 20 01:42:36.026498 kubelet[3178]: E0120 01:42:36.026360 3178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" Jan 20 01:42:36.026621 kubelet[3178]: E0120 01:42:36.026389 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9899c86f9-k8n9d_calico-apiserver(106a00f0-806f-4139-9f3e-5722fa42f199)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-9899c86f9-k8n9d_calico-apiserver(106a00f0-806f-4139-9f3e-5722fa42f199)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:42:36.028746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570-shm.mount: Deactivated successfully. Jan 20 01:42:36.692371 kubelet[3178]: I0120 01:42:36.692296 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:42:36.694389 containerd[1767]: time="2026-01-20T01:42:36.693763732Z" level=info msg="StopPodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\"" Jan 20 01:42:36.694389 containerd[1767]: time="2026-01-20T01:42:36.693928132Z" level=info msg="Ensure that sandbox 54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421 in task-service has been cleanup successfully" Jan 20 01:42:36.698408 kubelet[3178]: I0120 01:42:36.697259 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:42:36.698489 containerd[1767]: time="2026-01-20T01:42:36.698123771Z" level=info msg="StopPodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\"" Jan 20 01:42:36.698727 containerd[1767]: time="2026-01-20T01:42:36.698591051Z" level=info msg="Ensure that sandbox f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a in task-service has been cleanup successfully" Jan 20 01:42:36.701404 kubelet[3178]: I0120 01:42:36.701323 3178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:42:36.703173 containerd[1767]: time="2026-01-20T01:42:36.703054530Z" level=info msg="StopPodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\"" Jan 20 01:42:36.703836 containerd[1767]: time="2026-01-20T01:42:36.703803730Z" level=info msg="Ensure that sandbox 86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570 in task-service has been cleanup successfully" Jan 20 01:42:36.754546 containerd[1767]: time="2026-01-20T01:42:36.754125438Z" level=error msg="StopPodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" failed" error="failed to destroy network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.754678 kubelet[3178]: E0120 01:42:36.754403 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:42:36.754678 kubelet[3178]: E0120 01:42:36.754449 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570"} Jan 20 01:42:36.754678 kubelet[3178]: E0120 01:42:36.754481 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"106a00f0-806f-4139-9f3e-5722fa42f199\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:36.754678 kubelet[3178]: E0120 01:42:36.754505 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"106a00f0-806f-4139-9f3e-5722fa42f199\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:42:36.756889 containerd[1767]: time="2026-01-20T01:42:36.755996798Z" level=error msg="StopPodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" failed" error="failed to destroy network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.756996 kubelet[3178]: E0120 01:42:36.756629 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:42:36.756996 kubelet[3178]: E0120 01:42:36.756746 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a"} Jan 20 01:42:36.756996 kubelet[3178]: E0120 01:42:36.756773 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e893c04-8b87-47f0-b2a8-981971a5bfc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:36.756996 kubelet[3178]: E0120 01:42:36.756791 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e893c04-8b87-47f0-b2a8-981971a5bfc3\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:42:36.759021 containerd[1767]: time="2026-01-20T01:42:36.758986157Z" level=error msg="StopPodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" failed" error="failed to destroy network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:42:36.759462 kubelet[3178]: E0120 01:42:36.759114 3178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:42:36.759462 kubelet[3178]: E0120 01:42:36.759216 3178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421"} Jan 20 01:42:36.759462 kubelet[3178]: E0120 01:42:36.759241 3178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e8d57de-35ca-4ff1-828c-b0edcfa72a11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:42:36.759462 kubelet[3178]: E0120 01:42:36.759258 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e8d57de-35ca-4ff1-828c-b0edcfa72a11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:42:39.604952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695501268.mount: Deactivated successfully. 
Jan 20 01:42:40.467762 containerd[1767]: time="2026-01-20T01:42:40.467708432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:40.469617 containerd[1767]: time="2026-01-20T01:42:40.469476792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 20 01:42:40.472782 containerd[1767]: time="2026-01-20T01:42:40.471683991Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:40.475862 containerd[1767]: time="2026-01-20T01:42:40.475236270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:40.475862 containerd[1767]: time="2026-01-20T01:42:40.475752510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.811077944s" Jan 20 01:42:40.475862 containerd[1767]: time="2026-01-20T01:42:40.475780030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 01:42:40.488202 containerd[1767]: time="2026-01-20T01:42:40.488168947Z" level=info msg="CreateContainer within sandbox \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:42:40.527663 containerd[1767]: time="2026-01-20T01:42:40.527627218Z" level=info msg="CreateContainer within sandbox \"a0e2730585597377a6ca6c9f9612a147c0ff88d6aa143f62b3cd5bdf7a78cde9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"da104cf6abc9d1b34e7bb216db1c65190367537b3501f2e1830ace0a0e2de13a\"" Jan 20 01:42:40.528348 containerd[1767]: time="2026-01-20T01:42:40.528324858Z" level=info msg="StartContainer for \"da104cf6abc9d1b34e7bb216db1c65190367537b3501f2e1830ace0a0e2de13a\"" Jan 20 01:42:40.560043 systemd[1]: Started cri-containerd-da104cf6abc9d1b34e7bb216db1c65190367537b3501f2e1830ace0a0e2de13a.scope - libcontainer container da104cf6abc9d1b34e7bb216db1c65190367537b3501f2e1830ace0a0e2de13a. Jan 20 01:42:40.589344 containerd[1767]: time="2026-01-20T01:42:40.589174164Z" level=info msg="StartContainer for \"da104cf6abc9d1b34e7bb216db1c65190367537b3501f2e1830ace0a0e2de13a\" returns successfully" Jan 20 01:42:40.733867 kubelet[3178]: I0120 01:42:40.733794 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cjl8v" podStartSLOduration=1.045035837 podStartE2EDuration="15.733777971s" podCreationTimestamp="2026-01-20 01:42:25 +0000 UTC" firstStartedPulling="2026-01-20 01:42:25.787838296 +0000 UTC m=+25.321787545" lastFinishedPulling="2026-01-20 01:42:40.47658043 +0000 UTC m=+40.010529679" observedRunningTime="2026-01-20 01:42:40.731558732 +0000 UTC m=+40.265507981" watchObservedRunningTime="2026-01-20 01:42:40.733777971 +0000 UTC m=+40.267727180" Jan 20 01:42:40.973906 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 20 01:42:40.974060 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 20 01:42:41.088052 containerd[1767]: time="2026-01-20T01:42:41.087849011Z" level=info msg="StopPodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\"" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.181 [INFO][4480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.181 [INFO][4480] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" iface="eth0" netns="/var/run/netns/cni-fcda06fc-e026-27bf-0594-7492e5dd7e0c" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.183 [INFO][4480] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" iface="eth0" netns="/var/run/netns/cni-fcda06fc-e026-27bf-0594-7492e5dd7e0c" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.184 [INFO][4480] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" iface="eth0" netns="/var/run/netns/cni-fcda06fc-e026-27bf-0594-7492e5dd7e0c" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.184 [INFO][4480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.184 [INFO][4480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.208 [INFO][4489] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.208 [INFO][4489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.208 [INFO][4489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.216 [WARNING][4489] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.216 [INFO][4489] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.218 [INFO][4489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:41.221701 containerd[1767]: 2026-01-20 01:42:41.220 [INFO][4480] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:42:41.222429 containerd[1767]: time="2026-01-20T01:42:41.222262420Z" level=info msg="TearDown network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" successfully" Jan 20 01:42:41.222429 containerd[1767]: time="2026-01-20T01:42:41.222300300Z" level=info msg="StopPodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" returns successfully" Jan 20 01:42:41.224456 systemd[1]: run-netns-cni\x2dfcda06fc\x2de026\x2d27bf\x2d0594\x2d7492e5dd7e0c.mount: Deactivated successfully. Jan 20 01:42:41.329934 kubelet[3178]: I0120 01:42:41.329527 3178 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2qdf\" (UniqueName: \"kubernetes.io/projected/91ca927e-e136-46b4-98a3-df100d2c639a-kube-api-access-j2qdf\") pod \"91ca927e-e136-46b4-98a3-df100d2c639a\" (UID: \"91ca927e-e136-46b4-98a3-df100d2c639a\") " Jan 20 01:42:41.329934 kubelet[3178]: I0120 01:42:41.329596 3178 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-ca-bundle\") pod \"91ca927e-e136-46b4-98a3-df100d2c639a\" (UID: \"91ca927e-e136-46b4-98a3-df100d2c639a\") " Jan 20 01:42:41.329934 kubelet[3178]: I0120 01:42:41.329624 3178 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-backend-key-pair\") pod \"91ca927e-e136-46b4-98a3-df100d2c639a\" (UID: \"91ca927e-e136-46b4-98a3-df100d2c639a\") " Jan 20 01:42:41.333515 systemd[1]: var-lib-kubelet-pods-91ca927e\x2de136\x2d46b4\x2d98a3\x2ddf100d2c639a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:42:41.333617 systemd[1]: var-lib-kubelet-pods-91ca927e\x2de136\x2d46b4\x2d98a3\x2ddf100d2c639a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj2qdf.mount: Deactivated successfully. Jan 20 01:42:41.336333 kubelet[3178]: I0120 01:42:41.335492 3178 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "91ca927e-e136-46b4-98a3-df100d2c639a" (UID: "91ca927e-e136-46b4-98a3-df100d2c639a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:42:41.336333 kubelet[3178]: I0120 01:42:41.335592 3178 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ca927e-e136-46b4-98a3-df100d2c639a-kube-api-access-j2qdf" (OuterVolumeSpecName: "kube-api-access-j2qdf") pod "91ca927e-e136-46b4-98a3-df100d2c639a" (UID: "91ca927e-e136-46b4-98a3-df100d2c639a"). InnerVolumeSpecName "kube-api-access-j2qdf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:42:41.337480 kubelet[3178]: I0120 01:42:41.337361 3178 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "91ca927e-e136-46b4-98a3-df100d2c639a" (UID: "91ca927e-e136-46b4-98a3-df100d2c639a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:42:41.430161 kubelet[3178]: I0120 01:42:41.430040 3178 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-ca-bundle\") on node \"ci-4081.3.6-n-0046389dc1\" DevicePath \"\"" Jan 20 01:42:41.430161 kubelet[3178]: I0120 01:42:41.430069 3178 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91ca927e-e136-46b4-98a3-df100d2c639a-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-0046389dc1\" DevicePath \"\"" Jan 20 01:42:41.430161 kubelet[3178]: I0120 01:42:41.430079 3178 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2qdf\" (UniqueName: \"kubernetes.io/projected/91ca927e-e136-46b4-98a3-df100d2c639a-kube-api-access-j2qdf\") on node \"ci-4081.3.6-n-0046389dc1\" DevicePath \"\"" Jan 20 01:42:41.719839 systemd[1]: Removed slice kubepods-besteffort-pod91ca927e_e136_46b4_98a3_df100d2c639a.slice - libcontainer container kubepods-besteffort-pod91ca927e_e136_46b4_98a3_df100d2c639a.slice. Jan 20 01:42:41.802133 systemd[1]: Created slice kubepods-besteffort-pod64bfc6ad_94c7_4fd0_8f5e_dd6a18f5f9f7.slice - libcontainer container kubepods-besteffort-pod64bfc6ad_94c7_4fd0_8f5e_dd6a18f5f9f7.slice. Jan 20 01:42:41.935210 kubelet[3178]: I0120 01:42:41.935057 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwp7r\" (UniqueName: \"kubernetes.io/projected/64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7-kube-api-access-dwp7r\") pod \"whisker-bc8ccf7c-bn7qp\" (UID: \"64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7\") " pod="calico-system/whisker-bc8ccf7c-bn7qp" Jan 20 01:42:41.935210 kubelet[3178]: I0120 01:42:41.935131 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7-whisker-ca-bundle\") pod \"whisker-bc8ccf7c-bn7qp\" (UID: \"64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7\") " pod="calico-system/whisker-bc8ccf7c-bn7qp" Jan 20 01:42:41.935210 kubelet[3178]: I0120 01:42:41.935154 3178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7-whisker-backend-key-pair\") pod \"whisker-bc8ccf7c-bn7qp\" (UID: \"64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7\") " pod="calico-system/whisker-bc8ccf7c-bn7qp" Jan 20 01:42:42.105991 containerd[1767]: time="2026-01-20T01:42:42.105948179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc8ccf7c-bn7qp,Uid:64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7,Namespace:calico-system,Attempt:0,}" Jan 20 01:42:42.272104 systemd-networkd[1367]: calife70dd6b2a7: Link UP Jan 20 01:42:42.272788 systemd-networkd[1367]: calife70dd6b2a7: Gained carrier Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.154 [INFO][4511] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.166 [INFO][4511] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0 whisker-bc8ccf7c- calico-system 64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7 908 0 2026-01-20 01:42:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bc8ccf7c 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 whisker-bc8ccf7c-bn7qp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calife70dd6b2a7 [] [] }} ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.166 [INFO][4511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.187 [INFO][4523] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" HandleID="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.187 [INFO][4523] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" HandleID="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330520), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0046389dc1", "pod":"whisker-bc8ccf7c-bn7qp", "timestamp":"2026-01-20 01:42:42.187659 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.187 [INFO][4523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.187 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.187 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.196 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.199 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.202 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.204 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.205 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.205 [INFO][4523] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.206 [INFO][4523] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6 Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.213 [INFO][4523] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.232 [INFO][4523] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.129/26] block=192.168.28.128/26 handle="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.232 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.129/26] handle="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.232 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:42.290982 containerd[1767]: 2026-01-20 01:42:42.232 [INFO][4523] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.129/26] IPv6=[] ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" HandleID="k8s-pod-network.344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.291710 containerd[1767]: 2026-01-20 01:42:42.234 [INFO][4511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0", GenerateName:"whisker-bc8ccf7c-", Namespace:"calico-system", SelfLink:"", UID:"64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc8ccf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"whisker-bc8ccf7c-bn7qp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife70dd6b2a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:42.291710 containerd[1767]: 2026-01-20 01:42:42.234 [INFO][4511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.129/32] ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.291710 containerd[1767]: 2026-01-20 01:42:42.234 [INFO][4511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife70dd6b2a7 ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.291710 containerd[1767]: 2026-01-20 01:42:42.273 [INFO][4511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.291710 containerd[1767]: 2026-01-20 01:42:42.273 [INFO][4511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" 
WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0", GenerateName:"whisker-bc8ccf7c-", Namespace:"calico-system", SelfLink:"", UID:"64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bc8ccf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6", Pod:"whisker-bc8ccf7c-bn7qp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calife70dd6b2a7", MAC:"96:ea:31:96:47:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:42.291710 containerd[1767]: 2026-01-20 01:42:42.288 [INFO][4511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6" Namespace="calico-system" Pod="whisker-bc8ccf7c-bn7qp" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--bc8ccf7c--bn7qp-eth0" Jan 20 01:42:42.306841 containerd[1767]: time="2026-01-20T01:42:42.306763013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:42.306958 containerd[1767]: time="2026-01-20T01:42:42.306856733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:42.306958 containerd[1767]: time="2026-01-20T01:42:42.306885493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:42.307066 containerd[1767]: time="2026-01-20T01:42:42.307000853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:42.329032 systemd[1]: Started cri-containerd-344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6.scope - libcontainer container 344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6. 
Jan 20 01:42:42.357665 containerd[1767]: time="2026-01-20T01:42:42.357567641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bc8ccf7c-bn7qp,Uid:64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7,Namespace:calico-system,Attempt:0,} returns sandbox id \"344a45ac4f362b6703bd6b8e1f9313ece480ebdb523a7e78516b9a9de0e32ec6\"" Jan 20 01:42:42.360520 containerd[1767]: time="2026-01-20T01:42:42.360489761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:42:42.571606 kubelet[3178]: I0120 01:42:42.571342 3178 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ca927e-e136-46b4-98a3-df100d2c639a" path="/var/lib/kubelet/pods/91ca927e-e136-46b4-98a3-df100d2c639a/volumes" Jan 20 01:42:42.632024 containerd[1767]: time="2026-01-20T01:42:42.631918345Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:42.634351 containerd[1767]: time="2026-01-20T01:42:42.634287665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:42:42.634351 containerd[1767]: time="2026-01-20T01:42:42.634325505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:42:42.634971 kubelet[3178]: E0120 01:42:42.634930 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:42.635055 kubelet[3178]: E0120 01:42:42.635020 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:42.639971 kubelet[3178]: E0120 01:42:42.639927 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:496be695959a4924850b56723f0d0926,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:42.642273 containerd[1767]: time="2026-01-20T01:42:42.642242345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:42:42.898972 containerd[1767]: time="2026-01-20T01:42:42.898593817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:42.902029 containerd[1767]: time="2026-01-20T01:42:42.901949137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:42:42.902029 containerd[1767]: time="2026-01-20T01:42:42.901994737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:42.902252 kubelet[3178]: E0120 01:42:42.902215 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:42.902324 kubelet[3178]: E0120 01:42:42.902262 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:42.902412 kubelet[3178]: E0120 01:42:42.902366 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:42.903697 kubelet[3178]: E0120 01:42:42.903658 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:42:43.720351 kubelet[3178]: E0120 01:42:43.720098 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:42:43.965030 systemd-networkd[1367]: calife70dd6b2a7: Gained IPv6LL Jan 20 01:42:46.569964 containerd[1767]: time="2026-01-20T01:42:46.569860667Z" level=info msg="StopPodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\"" Jan 20 01:42:46.571766 containerd[1767]: time="2026-01-20T01:42:46.570737387Z" level=info msg="StopPodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\"" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.619 [INFO][4752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.620 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" iface="eth0" netns="/var/run/netns/cni-56381f76-2cbc-9767-8d13-39719797b9e0" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.622 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" iface="eth0" netns="/var/run/netns/cni-56381f76-2cbc-9767-8d13-39719797b9e0" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.622 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" iface="eth0" netns="/var/run/netns/cni-56381f76-2cbc-9767-8d13-39719797b9e0" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.622 [INFO][4752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.622 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.656 [INFO][4766] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.656 [INFO][4766] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.656 [INFO][4766] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.664 [WARNING][4766] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.665 [INFO][4766] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.666 [INFO][4766] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.672204 containerd[1767]: 2026-01-20 01:42:46.671 [INFO][4752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:42:46.675169 containerd[1767]: time="2026-01-20T01:42:46.672341544Z" level=info msg="TearDown network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" successfully" Jan 20 01:42:46.675169 containerd[1767]: time="2026-01-20T01:42:46.672369664Z" level=info msg="StopPodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" returns successfully" Jan 20 01:42:46.675169 containerd[1767]: time="2026-01-20T01:42:46.672949104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bwmf9,Uid:8a0da74e-5da6-4d17-baef-898cc44d92e7,Namespace:calico-system,Attempt:1,}" Jan 20 01:42:46.674618 systemd[1]: run-netns-cni\x2d56381f76\x2d2cbc\x2d9767\x2d8d13\x2d39719797b9e0.mount: Deactivated successfully. Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.625 [INFO][4753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.625 [INFO][4753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" iface="eth0" netns="/var/run/netns/cni-0de72fcb-ee5c-54a3-d7b3-4479938eaf0a" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.626 [INFO][4753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" iface="eth0" netns="/var/run/netns/cni-0de72fcb-ee5c-54a3-d7b3-4479938eaf0a" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.627 [INFO][4753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" iface="eth0" netns="/var/run/netns/cni-0de72fcb-ee5c-54a3-d7b3-4479938eaf0a" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.627 [INFO][4753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.627 [INFO][4753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.658 [INFO][4772] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.658 [INFO][4772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.666 [INFO][4772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.680 [WARNING][4772] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.680 [INFO][4772] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.682 [INFO][4772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.686757 containerd[1767]: 2026-01-20 01:42:46.684 [INFO][4753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:42:46.689628 containerd[1767]: time="2026-01-20T01:42:46.687205543Z" level=info msg="TearDown network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" successfully" Jan 20 01:42:46.689628 containerd[1767]: time="2026-01-20T01:42:46.687231303Z" level=info msg="StopPodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" returns successfully" Jan 20 01:42:46.689628 containerd[1767]: time="2026-01-20T01:42:46.689319743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj6z8,Uid:9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7,Namespace:kube-system,Attempt:1,}" Jan 20 01:42:46.689253 systemd[1]: run-netns-cni\x2d0de72fcb\x2dee5c\x2d54a3\x2dd7b3\x2d4479938eaf0a.mount: Deactivated successfully. 
Jan 20 01:42:46.845441 systemd-networkd[1367]: cali228f47d438a: Link UP Jan 20 01:42:46.847040 systemd-networkd[1367]: cali228f47d438a: Gained carrier Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.761 [INFO][4790] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.775 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0 coredns-668d6bf9bc- kube-system 9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7 936 0 2026-01-20 01:42:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 coredns-668d6bf9bc-jj6z8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali228f47d438a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.775 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.804 [INFO][4807] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" HandleID="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.804 [INFO][4807] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" HandleID="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-0046389dc1", "pod":"coredns-668d6bf9bc-jj6z8", "timestamp":"2026-01-20 01:42:46.80482474 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.805 [INFO][4807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.805 [INFO][4807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.805 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.813 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.816 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.819 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.821 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.822 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.822 [INFO][4807] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.823 [INFO][4807] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9 Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.830 [INFO][4807] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.839 [INFO][4807] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.130/26] block=192.168.28.128/26 handle="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.839 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.130/26] handle="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.839 [INFO][4807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:46.865214 containerd[1767]: 2026-01-20 01:42:46.839 [INFO][4807] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.130/26] IPv6=[] ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" HandleID="k8s-pod-network.6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.865786 containerd[1767]: 2026-01-20 01:42:46.841 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"coredns-668d6bf9bc-jj6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali228f47d438a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.865786 containerd[1767]: 2026-01-20 01:42:46.841 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.130/32] ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.865786 containerd[1767]: 2026-01-20 01:42:46.841 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali228f47d438a ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.865786 containerd[1767]: 2026-01-20 01:42:46.847 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.865786 containerd[1767]: 2026-01-20 01:42:46.848 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9", Pod:"coredns-668d6bf9bc-jj6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali228f47d438a", MAC:"3a:4f:4b:67:ca:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.865786 containerd[1767]: 2026-01-20 01:42:46.863 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9" Namespace="kube-system" Pod="coredns-668d6bf9bc-jj6z8" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:42:46.885977 containerd[1767]: time="2026-01-20T01:42:46.883892457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:46.885977 containerd[1767]: time="2026-01-20T01:42:46.885208577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:46.885977 containerd[1767]: time="2026-01-20T01:42:46.885219417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:46.885977 containerd[1767]: time="2026-01-20T01:42:46.885294057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:46.898431 systemd[1]: Started cri-containerd-6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9.scope - libcontainer container 6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9. Jan 20 01:42:46.939321 containerd[1767]: time="2026-01-20T01:42:46.939220216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jj6z8,Uid:9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7,Namespace:kube-system,Attempt:1,} returns sandbox id \"6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9\"" Jan 20 01:42:46.946586 containerd[1767]: time="2026-01-20T01:42:46.946425016Z" level=info msg="CreateContainer within sandbox \"6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:42:46.970059 systemd-networkd[1367]: cali22174637fbe: Link UP Jan 20 01:42:46.970242 systemd-networkd[1367]: cali22174637fbe: Gained carrier Jan 20 01:42:46.982892 containerd[1767]: time="2026-01-20T01:42:46.981682414Z" level=info msg="CreateContainer within sandbox \"6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3baac17586f189ebba3fb40b1fb0cd9dc5c1d2e974f6c1398fa09d112b01ab9f\"" Jan 20 01:42:46.982892 containerd[1767]: time="2026-01-20T01:42:46.982601374Z" level=info msg="StartContainer for \"3baac17586f189ebba3fb40b1fb0cd9dc5c1d2e974f6c1398fa09d112b01ab9f\"" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.753 [INFO][4780] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.770 [INFO][4780] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0 goldmane-666569f655- calico-system 8a0da74e-5da6-4d17-baef-898cc44d92e7 935 0 2026-01-20 01:42:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 goldmane-666569f655-bwmf9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali22174637fbe [] [] }} ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.770 [INFO][4780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.806 [INFO][4805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" HandleID="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.806 [INFO][4805] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" HandleID="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0046389dc1", "pod":"goldmane-666569f655-bwmf9", "timestamp":"2026-01-20 01:42:46.80604514 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.806 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.839 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.839 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.914 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.931 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.936 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.937 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.940 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.940 [INFO][4805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.941 [INFO][4805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444 Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.951 [INFO][4805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.962 [INFO][4805] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.131/26] block=192.168.28.128/26 handle="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.962 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.131/26] handle="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.962 [INFO][4805] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:46.993260 containerd[1767]: 2026-01-20 01:42:46.962 [INFO][4805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.131/26] IPv6=[] ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" HandleID="k8s-pod-network.6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.994018 containerd[1767]: 2026-01-20 01:42:46.964 [INFO][4780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8a0da74e-5da6-4d17-baef-898cc44d92e7", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"goldmane-666569f655-bwmf9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22174637fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.994018 containerd[1767]: 2026-01-20 01:42:46.965 [INFO][4780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.131/32] ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.994018 containerd[1767]: 2026-01-20 01:42:46.965 [INFO][4780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22174637fbe ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.994018 containerd[1767]: 2026-01-20 01:42:46.970 [INFO][4780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:46.994018 containerd[1767]: 2026-01-20 01:42:46.971 [INFO][4780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8a0da74e-5da6-4d17-baef-898cc44d92e7", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444", Pod:"goldmane-666569f655-bwmf9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22174637fbe", MAC:"de:05:e3:f5:eb:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:46.994018 containerd[1767]: 2026-01-20 01:42:46.990 [INFO][4780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444" Namespace="calico-system" Pod="goldmane-666569f655-bwmf9" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:42:47.012071 systemd[1]: Started cri-containerd-3baac17586f189ebba3fb40b1fb0cd9dc5c1d2e974f6c1398fa09d112b01ab9f.scope - libcontainer container 3baac17586f189ebba3fb40b1fb0cd9dc5c1d2e974f6c1398fa09d112b01ab9f. Jan 20 01:42:47.022774 containerd[1767]: time="2026-01-20T01:42:47.022593373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:47.022774 containerd[1767]: time="2026-01-20T01:42:47.022649573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:47.022774 containerd[1767]: time="2026-01-20T01:42:47.022664773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:47.022774 containerd[1767]: time="2026-01-20T01:42:47.022736693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:47.044083 systemd[1]: Started cri-containerd-6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444.scope - libcontainer container 6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444. 
Jan 20 01:42:47.050691 containerd[1767]: time="2026-01-20T01:42:47.050661812Z" level=info msg="StartContainer for \"3baac17586f189ebba3fb40b1fb0cd9dc5c1d2e974f6c1398fa09d112b01ab9f\" returns successfully" Jan 20 01:42:47.092405 containerd[1767]: time="2026-01-20T01:42:47.092365371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bwmf9,Uid:8a0da74e-5da6-4d17-baef-898cc44d92e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444\"" Jan 20 01:42:47.095753 containerd[1767]: time="2026-01-20T01:42:47.095723811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:42:47.344334 containerd[1767]: time="2026-01-20T01:42:47.344222164Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:47.347212 containerd[1767]: time="2026-01-20T01:42:47.347085083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:42:47.347212 containerd[1767]: time="2026-01-20T01:42:47.347181563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:47.349978 kubelet[3178]: E0120 01:42:47.347331 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:42:47.349978 kubelet[3178]: E0120 01:42:47.349956 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:42:47.350302 kubelet[3178]: E0120 01:42:47.350099 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bwmf9_calico-system(8a0da74e-5da6-4d17-baef-898cc44d92e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:47.351809 kubelet[3178]: E0120 01:42:47.351598 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:42:47.569728 containerd[1767]: 
time="2026-01-20T01:42:47.569379517Z" level=info msg="StopPodSandbox for \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\"" Jan 20 01:42:47.569728 containerd[1767]: time="2026-01-20T01:42:47.569436797Z" level=info msg="StopPodSandbox for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\"" Jan 20 01:42:47.571238 containerd[1767]: time="2026-01-20T01:42:47.571208157Z" level=info msg="StopPodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\"" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.645 [INFO][5000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.647 [INFO][5000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" iface="eth0" netns="/var/run/netns/cni-e89b3a7c-e022-1949-aa4e-3b328d3d041a" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.651 [INFO][5000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" iface="eth0" netns="/var/run/netns/cni-e89b3a7c-e022-1949-aa4e-3b328d3d041a" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.651 [INFO][5000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" iface="eth0" netns="/var/run/netns/cni-e89b3a7c-e022-1949-aa4e-3b328d3d041a" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.651 [INFO][5000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.651 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.691 [INFO][5027] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.691 [INFO][5027] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.691 [INFO][5027] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.705 [WARNING][5027] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.705 [INFO][5027] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.706 [INFO][5027] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.710281 containerd[1767]: 2026-01-20 01:42:47.708 [INFO][5000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:42:47.712114 containerd[1767]: time="2026-01-20T01:42:47.711971432Z" level=info msg="TearDown network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" successfully" Jan 20 01:42:47.712114 containerd[1767]: time="2026-01-20T01:42:47.712008432Z" level=info msg="StopPodSandbox for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" returns successfully" Jan 20 01:42:47.714584 containerd[1767]: time="2026-01-20T01:42:47.714393152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd848bd58-mqt54,Uid:f24ea947-18f0-4003-bcc7-bb3d7376a6ba,Namespace:calico-system,Attempt:1,}" Jan 20 01:42:47.715241 systemd[1]: run-netns-cni\x2de89b3a7c\x2de022\x2d1949\x2daa4e\x2d3b328d3d041a.mount: Deactivated successfully. Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.640 [INFO][5005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.641 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" iface="eth0" netns="/var/run/netns/cni-814ec1ec-b9f4-876a-1171-da0ed7376a16" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.642 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" iface="eth0" netns="/var/run/netns/cni-814ec1ec-b9f4-876a-1171-da0ed7376a16" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.643 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" iface="eth0" netns="/var/run/netns/cni-814ec1ec-b9f4-876a-1171-da0ed7376a16" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.643 [INFO][5005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.643 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.703 [INFO][5019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.703 [INFO][5019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.706 [INFO][5019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.719 [WARNING][5019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.719 [INFO][5019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.720 [INFO][5019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.724505 containerd[1767]: 2026-01-20 01:42:47.722 [INFO][5005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:42:47.726997 containerd[1767]: time="2026-01-20T01:42:47.726960152Z" level=info msg="TearDown network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" successfully" Jan 20 01:42:47.726997 containerd[1767]: time="2026-01-20T01:42:47.726990072Z" level=info msg="StopPodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" returns successfully" Jan 20 01:42:47.727861 containerd[1767]: time="2026-01-20T01:42:47.727827472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-mvhh6,Uid:0e893c04-8b87-47f0-b2a8-981971a5bfc3,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:42:47.728812 systemd[1]: run-netns-cni\x2d814ec1ec\x2db9f4\x2d876a\x2d1171\x2dda0ed7376a16.mount: Deactivated successfully. 
Jan 20 01:42:47.737580 kubelet[3178]: E0120 01:42:47.736525 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.645 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.646 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" iface="eth0" netns="/var/run/netns/cni-d723fc5f-5997-fcb8-166f-51f6aa619ff0" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.646 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" iface="eth0" netns="/var/run/netns/cni-d723fc5f-5997-fcb8-166f-51f6aa619ff0" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.649 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" iface="eth0" netns="/var/run/netns/cni-d723fc5f-5997-fcb8-166f-51f6aa619ff0" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.649 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.649 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.704 [INFO][5022] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.704 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.721 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.738 [WARNING][5022] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.738 [INFO][5022] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.741 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:47.748175 containerd[1767]: 2026-01-20 01:42:47.745 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:42:47.754344 containerd[1767]: time="2026-01-20T01:42:47.749542471Z" level=info msg="TearDown network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" successfully" Jan 20 01:42:47.754344 containerd[1767]: time="2026-01-20T01:42:47.749571671Z" level=info msg="StopPodSandbox for \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" returns successfully" Jan 20 01:42:47.752273 systemd[1]: run-netns-cni\x2dd723fc5f\x2d5997\x2dfcb8\x2d166f\x2d51f6aa619ff0.mount: Deactivated successfully. Jan 20 01:42:47.760231 containerd[1767]: time="2026-01-20T01:42:47.759436391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5k4,Uid:5b059e7e-b61a-45ee-b787-908d675a8c0c,Namespace:kube-system,Attempt:1,}" Jan 20 01:42:47.764416 kubelet[3178]: I0120 01:42:47.762614 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jj6z8" podStartSLOduration=42.762599671 podStartE2EDuration="42.762599671s" podCreationTimestamp="2026-01-20 01:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:47.760764391 +0000 UTC m=+47.294713640" watchObservedRunningTime="2026-01-20 01:42:47.762599671 +0000 UTC m=+47.296548920" Jan 20 01:42:47.970452 systemd-networkd[1367]: calia2e5cb6de89: Link UP Jan 20 01:42:47.973835 systemd-networkd[1367]: calia2e5cb6de89: Gained carrier Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.805 [INFO][5042] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.826 [INFO][5042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0 calico-kube-controllers-cd848bd58- calico-system f24ea947-18f0-4003-bcc7-bb3d7376a6ba 955 0 2026-01-20 01:42:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd848bd58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 calico-kube-controllers-cd848bd58-mqt54 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia2e5cb6de89 [] [] }} ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" 
Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.826 [INFO][5042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.891 [INFO][5078] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" HandleID="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.893 [INFO][5078] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" HandleID="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0046389dc1", "pod":"calico-kube-controllers-cd848bd58-mqt54", "timestamp":"2026-01-20 01:42:47.891794947 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.893 [INFO][5078] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.893 [INFO][5078] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.893 [INFO][5078] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.916 [INFO][5078] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.925 [INFO][5078] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.937 [INFO][5078] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.941 [INFO][5078] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.945 [INFO][5078] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.946 [INFO][5078] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.948 [INFO][5078] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474 Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.958 [INFO][5078] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.965 [INFO][5078] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.132/26] block=192.168.28.128/26 handle="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.965 [INFO][5078] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.132/26] handle="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.965 [INFO][5078] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:47.995698 containerd[1767]: 2026-01-20 01:42:47.965 [INFO][5078] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.132/26] IPv6=[] ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" HandleID="k8s-pod-network.e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.997824 containerd[1767]: 2026-01-20 01:42:47.968 [INFO][5042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0", GenerateName:"calico-kube-controllers-cd848bd58-", Namespace:"calico-system", SelfLink:"", UID:"f24ea947-18f0-4003-bcc7-bb3d7376a6ba", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd848bd58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"calico-kube-controllers-cd848bd58-mqt54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2e5cb6de89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.997824 containerd[1767]: 2026-01-20 01:42:47.968 [INFO][5042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.132/32] ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.997824 containerd[1767]: 2026-01-20 01:42:47.968 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2e5cb6de89 ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:47.997824 containerd[1767]: 2026-01-20 01:42:47.974 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 
20 01:42:47.997824 containerd[1767]: 2026-01-20 01:42:47.976 [INFO][5042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0", GenerateName:"calico-kube-controllers-cd848bd58-", Namespace:"calico-system", SelfLink:"", UID:"f24ea947-18f0-4003-bcc7-bb3d7376a6ba", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd848bd58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474", Pod:"calico-kube-controllers-cd848bd58-mqt54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2e5cb6de89", MAC:"8e:ab:1a:ef:9f:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:47.997824 containerd[1767]: 2026-01-20 01:42:47.993 [INFO][5042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474" Namespace="calico-system" Pod="calico-kube-controllers-cd848bd58-mqt54" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:42:48.030047 containerd[1767]: time="2026-01-20T01:42:48.029717383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:48.030734 containerd[1767]: time="2026-01-20T01:42:48.030680823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:48.032531 containerd[1767]: time="2026-01-20T01:42:48.030957143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.032928 containerd[1767]: time="2026-01-20T01:42:48.032832143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.061390 systemd[1]: Started cri-containerd-e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474.scope - libcontainer container e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474. 
Jan 20 01:42:48.068884 systemd-networkd[1367]: cali89bbcf37339: Link UP Jan 20 01:42:48.069427 systemd-networkd[1367]: cali89bbcf37339: Gained carrier Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.858 [INFO][5066] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.882 [INFO][5066] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0 coredns-668d6bf9bc- kube-system 5b059e7e-b61a-45ee-b787-908d675a8c0c 956 0 2026-01-20 01:42:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 coredns-668d6bf9bc-5t5k4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89bbcf37339 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.883 [INFO][5066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.924 [INFO][5089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" HandleID="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.925 [INFO][5089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" HandleID="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-0046389dc1", "pod":"coredns-668d6bf9bc-5t5k4", "timestamp":"2026-01-20 01:42:47.924797546 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.925 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.965 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:47.965 [INFO][5089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.016 [INFO][5089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.022 [INFO][5089] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.033 [INFO][5089] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.035 [INFO][5089] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.037 [INFO][5089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.037 [INFO][5089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.042 [INFO][5089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.048 [INFO][5089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.059 [INFO][5089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.133/26] block=192.168.28.128/26 handle="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.059 [INFO][5089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.133/26] handle="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.059 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:42:48.089954 containerd[1767]: 2026-01-20 01:42:48.059 [INFO][5089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.133/26] IPv6=[] ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" HandleID="k8s-pod-network.eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.091185 containerd[1767]: 2026-01-20 01:42:48.065 [INFO][5066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b059e7e-b61a-45ee-b787-908d675a8c0c", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"coredns-668d6bf9bc-5t5k4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bbcf37339", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:48.091185 containerd[1767]: 2026-01-20 01:42:48.066 [INFO][5066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.133/32] ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.091185 containerd[1767]: 2026-01-20 01:42:48.066 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89bbcf37339 ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.091185 containerd[1767]: 2026-01-20 01:42:48.070 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.091185 containerd[1767]: 2026-01-20 01:42:48.070 [INFO][5066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b059e7e-b61a-45ee-b787-908d675a8c0c", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b", Pod:"coredns-668d6bf9bc-5t5k4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bbcf37339", MAC:"36:00:74:a5:35:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:48.091185 containerd[1767]: 2026-01-20 01:42:48.086 [INFO][5066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b" Namespace="kube-system" Pod="coredns-668d6bf9bc-5t5k4" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:42:48.111742 containerd[1767]: time="2026-01-20T01:42:48.111672620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:48.112443 containerd[1767]: time="2026-01-20T01:42:48.112254180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:48.112443 containerd[1767]: time="2026-01-20T01:42:48.112336340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.112571 containerd[1767]: time="2026-01-20T01:42:48.112429140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.139612 containerd[1767]: time="2026-01-20T01:42:48.139548900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd848bd58-mqt54,Uid:f24ea947-18f0-4003-bcc7-bb3d7376a6ba,Namespace:calico-system,Attempt:1,} returns sandbox id \"e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474\"" Jan 20 01:42:48.145311 containerd[1767]: time="2026-01-20T01:42:48.145118459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:42:48.146661 systemd[1]: Started cri-containerd-eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b.scope - libcontainer container eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b. Jan 20 01:42:48.176505 systemd-networkd[1367]: cali59c695ce7f3: Link UP Jan 20 01:42:48.177317 systemd-networkd[1367]: cali59c695ce7f3: Gained carrier Jan 20 01:42:48.189073 systemd-networkd[1367]: cali228f47d438a: Gained IPv6LL Jan 20 01:42:48.197513 containerd[1767]: time="2026-01-20T01:42:48.197475018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5t5k4,Uid:5b059e7e-b61a-45ee-b787-908d675a8c0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b\"" Jan 20 01:42:48.205137 containerd[1767]: time="2026-01-20T01:42:48.204826378Z" level=info msg="CreateContainer within sandbox \"eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:47.879 [INFO][5054] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:47.916 [INFO][5054] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0 calico-apiserver-9899c86f9- calico-apiserver 0e893c04-8b87-47f0-b2a8-981971a5bfc3 954 0 2026-01-20 01:42:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9899c86f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 calico-apiserver-9899c86f9-mvhh6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali59c695ce7f3 [] [] }} ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:47.917 [INFO][5054] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:47.960 [INFO][5097] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" HandleID="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" 
Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:47.960 [INFO][5097] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" HandleID="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-0046389dc1", "pod":"calico-apiserver-9899c86f9-mvhh6", "timestamp":"2026-01-20 01:42:47.960623185 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:47.960 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.060 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.060 [INFO][5097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.116 [INFO][5097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.123 [INFO][5097] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.134 [INFO][5097] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.137 [INFO][5097] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.143 [INFO][5097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.144 [INFO][5097] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.151 [INFO][5097] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0 Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.158 [INFO][5097] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.170 [INFO][5097] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.134/26] block=192.168.28.128/26 handle="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.170 [INFO][5097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.134/26] 
handle="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.170 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:48.205830 containerd[1767]: 2026-01-20 01:42:48.170 [INFO][5097] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.134/26] IPv6=[] ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" HandleID="k8s-pod-network.0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:48.206666 containerd[1767]: 2026-01-20 01:42:48.173 [INFO][5054] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e893c04-8b87-47f0-b2a8-981971a5bfc3", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"calico-apiserver-9899c86f9-mvhh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c695ce7f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:48.206666 containerd[1767]: 2026-01-20 01:42:48.173 [INFO][5054] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.134/32] ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:48.206666 containerd[1767]: 2026-01-20 01:42:48.173 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59c695ce7f3 ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:48.206666 containerd[1767]: 2026-01-20 01:42:48.178 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:48.206666 containerd[1767]: 2026-01-20 01:42:48.180 [INFO][5054] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e893c04-8b87-47f0-b2a8-981971a5bfc3", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0", Pod:"calico-apiserver-9899c86f9-mvhh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c695ce7f3", MAC:"2e:b6:97:5a:49:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:48.206666 containerd[1767]: 2026-01-20 01:42:48.201 [INFO][5054] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-mvhh6" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:42:48.239552 containerd[1767]: time="2026-01-20T01:42:48.239447177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:48.240010 containerd[1767]: time="2026-01-20T01:42:48.239571417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:48.240010 containerd[1767]: time="2026-01-20T01:42:48.239604737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.240103 containerd[1767]: time="2026-01-20T01:42:48.240020497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.248669 containerd[1767]: time="2026-01-20T01:42:48.248635216Z" level=info msg="CreateContainer within sandbox \"eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"924a0b702418959f65080a059101be5f3b772a4fc22fe2bf969f8b6b0117a7e3\"" Jan 20 01:42:48.250582 containerd[1767]: time="2026-01-20T01:42:48.250553376Z" level=info msg="StartContainer for \"924a0b702418959f65080a059101be5f3b772a4fc22fe2bf969f8b6b0117a7e3\"" Jan 20 01:42:48.255363 systemd[1]: Started cri-containerd-0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0.scope - libcontainer container 0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0. Jan 20 01:42:48.276040 systemd[1]: Started cri-containerd-924a0b702418959f65080a059101be5f3b772a4fc22fe2bf969f8b6b0117a7e3.scope - libcontainer container 924a0b702418959f65080a059101be5f3b772a4fc22fe2bf969f8b6b0117a7e3. Jan 20 01:42:48.300806 containerd[1767]: time="2026-01-20T01:42:48.300578135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-mvhh6,Uid:0e893c04-8b87-47f0-b2a8-981971a5bfc3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0\"" Jan 20 01:42:48.318817 containerd[1767]: time="2026-01-20T01:42:48.318774494Z" level=info msg="StartContainer for \"924a0b702418959f65080a059101be5f3b772a4fc22fe2bf969f8b6b0117a7e3\" returns successfully" Jan 20 01:42:48.443773 containerd[1767]: time="2026-01-20T01:42:48.443728410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:48.446994 containerd[1767]: time="2026-01-20T01:42:48.446952970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:42:48.447164 containerd[1767]: time="2026-01-20T01:42:48.447047970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:48.447238 kubelet[3178]: E0120 01:42:48.447188 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:42:48.448135 kubelet[3178]: E0120 01:42:48.447236 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:42:48.448135 kubelet[3178]: E0120 01:42:48.447791 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n7lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd848bd58-mqt54_calico-system(f24ea947-18f0-4003-bcc7-bb3d7376a6ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:48.448297 containerd[1767]: time="2026-01-20T01:42:48.447521530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:48.449234 kubelet[3178]: E0120 01:42:48.449185 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" 
podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:42:48.712501 containerd[1767]: time="2026-01-20T01:42:48.712275122Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:48.715473 containerd[1767]: time="2026-01-20T01:42:48.715276522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:48.715473 containerd[1767]: time="2026-01-20T01:42:48.715300482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:48.715618 kubelet[3178]: E0120 01:42:48.715493 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:48.715618 kubelet[3178]: E0120 01:42:48.715545 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:48.716049 kubelet[3178]: E0120 01:42:48.715668 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-mvhh6_calico-apiserver(0e893c04-8b87-47f0-b2a8-981971a5bfc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:48.717330 kubelet[3178]: E0120 01:42:48.717256 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:42:48.742113 kubelet[3178]: E0120 01:42:48.741910 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:42:48.745422 kubelet[3178]: E0120 01:42:48.744824 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:42:48.745422 kubelet[3178]: E0120 01:42:48.745384 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:42:48.755534 kubelet[3178]: I0120 01:42:48.755314 3178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5t5k4" podStartSLOduration=43.755301401 podStartE2EDuration="43.755301401s" podCreationTimestamp="2026-01-20 01:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:48.755094881 +0000 UTC m=+48.289044090" watchObservedRunningTime="2026-01-20 01:42:48.755301401 +0000 UTC m=+48.289250650" Jan 20 01:42:48.957060 systemd-networkd[1367]: cali22174637fbe: Gained IPv6LL Jan 20 01:42:49.149044 systemd-networkd[1367]: cali89bbcf37339: Gained IPv6LL Jan 20 01:42:49.668038 kubelet[3178]: I0120 01:42:49.667911 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:42:49.749467 kubelet[3178]: E0120 01:42:49.749427 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:42:49.750584 kubelet[3178]: E0120 01:42:49.750540 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:42:49.854928 kernel: bpftool[5345]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 01:42:49.917028 systemd-networkd[1367]: calia2e5cb6de89: Gained IPv6LL Jan 20 01:42:50.063622 systemd-networkd[1367]: vxlan.calico: Link UP Jan 20 01:42:50.063630 systemd-networkd[1367]: vxlan.calico: Gained carrier Jan 20 01:42:50.109022 systemd-networkd[1367]: cali59c695ce7f3: Gained IPv6LL Jan 20 01:42:50.570917 containerd[1767]: time="2026-01-20T01:42:50.570391507Z" level=info msg="StopPodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\"" Jan 20 01:42:50.572223 containerd[1767]: time="2026-01-20T01:42:50.571070986Z" level=info msg="StopPodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\"" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.627 [INFO][5440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.628 [INFO][5440] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" iface="eth0" netns="/var/run/netns/cni-563d6a5f-0d1b-abe9-bd9a-8856d9c13501" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.628 [INFO][5440] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" iface="eth0" netns="/var/run/netns/cni-563d6a5f-0d1b-abe9-bd9a-8856d9c13501" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.629 [INFO][5440] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" iface="eth0" netns="/var/run/netns/cni-563d6a5f-0d1b-abe9-bd9a-8856d9c13501" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.629 [INFO][5440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.629 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.672 [INFO][5452] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.673 [INFO][5452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.673 [INFO][5452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.686 [WARNING][5452] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.686 [INFO][5452] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.688 [INFO][5452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:50.694356 containerd[1767]: 2026-01-20 01:42:50.691 [INFO][5440] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:42:50.696107 containerd[1767]: time="2026-01-20T01:42:50.695978333Z" level=info msg="TearDown network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" successfully" Jan 20 01:42:50.696107 containerd[1767]: time="2026-01-20T01:42:50.696009173Z" level=info msg="StopPodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" returns successfully" Jan 20 01:42:50.697407 containerd[1767]: time="2026-01-20T01:42:50.697044973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8z22,Uid:e68b55a2-bd34-4f7b-b8c9-be9ad16a2026,Namespace:calico-system,Attempt:1,}" Jan 20 01:42:50.700578 systemd[1]: run-netns-cni\x2d563d6a5f\x2d0d1b\x2dabe9\x2dbd9a\x2d8856d9c13501.mount: Deactivated successfully. Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.633 [INFO][5439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.634 [INFO][5439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" iface="eth0" netns="/var/run/netns/cni-f1cac19f-22b1-6cd9-656a-a207cebd449c" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.634 [INFO][5439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" iface="eth0" netns="/var/run/netns/cni-f1cac19f-22b1-6cd9-656a-a207cebd449c" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.635 [INFO][5439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" iface="eth0" netns="/var/run/netns/cni-f1cac19f-22b1-6cd9-656a-a207cebd449c" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.635 [INFO][5439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.635 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.683 [INFO][5458] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.684 [INFO][5458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.688 [INFO][5458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.704 [WARNING][5458] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.704 [INFO][5458] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.705 [INFO][5458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:50.710695 containerd[1767]: 2026-01-20 01:42:50.708 [INFO][5439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:42:50.710695 containerd[1767]: time="2026-01-20T01:42:50.710567611Z" level=info msg="TearDown network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" successfully" Jan 20 01:42:50.710695 containerd[1767]: time="2026-01-20T01:42:50.710586411Z" level=info msg="StopPodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" returns successfully" Jan 20 01:42:50.712872 containerd[1767]: time="2026-01-20T01:42:50.711477090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d975bd6cf-mbc2x,Uid:0e8d57de-35ca-4ff1-828c-b0edcfa72a11,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:42:50.715415 systemd[1]: run-netns-cni\x2df1cac19f\x2d22b1\x2d6cd9\x2d656a\x2da207cebd449c.mount: Deactivated successfully. 
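The two StopPodSandbox teardowns above each end with systemd deactivating a matching run-netns-cni-*.mount unit, i.e. the bind mount that pinned the pod's network namespace under /var/run/netns. A minimal triage sketch (a hypothetical helper, not part of Calico, containerd, or systemd) that lists what is still pinned there and flags cni-* entries, which should disappear once teardown completes:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// /var/run/netns is the path the cni-* namespaces above are pinned under.
	entries, err := os.ReadDir("/var/run/netns")
	if err != nil {
		// The directory may simply not exist when no namespaces are pinned.
		fmt.Fprintln(os.Stderr, "read /var/run/netns:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		marker := " "
		if strings.HasPrefix(e.Name(), "cni-") {
			marker = "*" // pinned by a CNI sandbox; stale if its pod is gone
		}
		fmt.Printf("%s %s\n", marker, e.Name())
	}
}
```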
Jan 20 01:42:50.936769 systemd-networkd[1367]: cali866d07bb205: Link UP Jan 20 01:42:50.938641 systemd-networkd[1367]: cali866d07bb205: Gained carrier Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.829 [INFO][5486] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0 calico-apiserver-7d975bd6cf- calico-apiserver 0e8d57de-35ca-4ff1-828c-b0edcfa72a11 1030 0 2026-01-20 01:42:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d975bd6cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 calico-apiserver-7d975bd6cf-mbc2x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali866d07bb205 [] [] }} ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.830 [INFO][5486] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.886 [INFO][5506] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" HandleID="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.887 [INFO][5506] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" HandleID="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-0046389dc1", "pod":"calico-apiserver-7d975bd6cf-mbc2x", "timestamp":"2026-01-20 01:42:50.886690784 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.887 [INFO][5506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.888 [INFO][5506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.888 [INFO][5506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.899 [INFO][5506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.903 [INFO][5506] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.908 [INFO][5506] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.910 [INFO][5506] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.913 [INFO][5506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.913 [INFO][5506] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.915 [INFO][5506] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196 Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.920 [INFO][5506] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.929 [INFO][5506] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.135/26] block=192.168.28.128/26 handle="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.929 [INFO][5506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.135/26] handle="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.929 [INFO][5506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
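The IPAM walk just logged is one complete allocation under the host-wide lock: confirm the node's affinity to block 192.168.28.128/26, load the block, claim 192.168.28.135, write the block back, release the lock. A /26 gives this node 64 addresses, which is why the later claims in this log (.136 and .137 below) come out of the same block. A quick check of that arithmetic with net/netip, as an illustration only:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.28.128/26") // the node's affine block
	// The three addresses claimed in this excerpt all fall inside it.
	for _, s := range []string{"192.168.28.135", "192.168.28.136", "192.168.28.137"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
	fmt.Printf("block capacity: %d addresses\n", 1<<(32-block.Bits())) // 64
}
```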
Jan 20 01:42:50.958858 containerd[1767]: 2026-01-20 01:42:50.929 [INFO][5506] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.135/26] IPv6=[] ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" HandleID="k8s-pod-network.cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.959685 containerd[1767]: 2026-01-20 01:42:50.932 [INFO][5486] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0", GenerateName:"calico-apiserver-7d975bd6cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e8d57de-35ca-4ff1-828c-b0edcfa72a11", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d975bd6cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"calico-apiserver-7d975bd6cf-mbc2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali866d07bb205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:50.959685 containerd[1767]: 2026-01-20 01:42:50.932 [INFO][5486] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.135/32] ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.959685 containerd[1767]: 2026-01-20 01:42:50.932 [INFO][5486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali866d07bb205 ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.959685 containerd[1767]: 2026-01-20 01:42:50.936 [INFO][5486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.959685 containerd[1767]: 2026-01-20 01:42:50.939 
[INFO][5486] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0", GenerateName:"calico-apiserver-7d975bd6cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e8d57de-35ca-4ff1-828c-b0edcfa72a11", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d975bd6cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196", Pod:"calico-apiserver-7d975bd6cf-mbc2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali866d07bb205", MAC:"a2:ee:3f:5b:55:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:50.959685 containerd[1767]: 2026-01-20 01:42:50.953 [INFO][5486] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196" Namespace="calico-apiserver" Pod="calico-apiserver-7d975bd6cf-mbc2x" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:42:50.980040 containerd[1767]: time="2026-01-20T01:42:50.978307850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:50.980040 containerd[1767]: time="2026-01-20T01:42:50.978370730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:50.980040 containerd[1767]: time="2026-01-20T01:42:50.978381210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:50.980040 containerd[1767]: time="2026-01-20T01:42:50.978468170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:50.997064 systemd[1]: Started cri-containerd-cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196.scope - libcontainer container cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196. 
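cali866d07bb205 has gained carrier, and like the other cali* interfaces earlier it will shortly log Gained IPv6LL; the endpoint record above carries the MAC a2:ee:3f:5b:55:ff. Assuming classic EUI-64 generation (the kernel or systemd-networkd may be configured for stable-privacy addresses instead), a link-local address follows from a MAC like this sketch shows:

```go
package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal derives the IPv6 link-local address SLAAC would produce
// for a 48-bit MAC: flip the universal/local bit of the first octet and
// splice ff:fe into the middle, under the fe80::/64 prefix.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // a2 -> a0
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, _ := net.ParseMAC("a2:ee:3f:5b:55:ff") // MAC recorded on the endpoint above
	fmt.Println(eui64LinkLocal(mac))            // fe80::a0ee:3fff:fe5b:55ff
}
```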
Jan 20 01:42:51.041579 containerd[1767]: time="2026-01-20T01:42:51.041322001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d975bd6cf-mbc2x,Uid:0e8d57de-35ca-4ff1-828c-b0edcfa72a11,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196\"" Jan 20 01:42:51.046597 containerd[1767]: time="2026-01-20T01:42:51.046480880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:51.050917 systemd-networkd[1367]: cali7c784e4199b: Link UP Jan 20 01:42:51.055105 systemd-networkd[1367]: cali7c784e4199b: Gained carrier Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.834 [INFO][5476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0 csi-node-driver- calico-system e68b55a2-bd34-4f7b-b8c9-be9ad16a2026 1029 0 2026-01-20 01:42:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 csi-node-driver-n8z22 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7c784e4199b [] [] }} ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.834 [INFO][5476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.890 [INFO][5511] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" HandleID="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.890 [INFO][5511] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" HandleID="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-0046389dc1", "pod":"csi-node-driver-n8z22", "timestamp":"2026-01-20 01:42:50.890547904 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.890 [INFO][5511] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.929 [INFO][5511] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:50.929 [INFO][5511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.001 [INFO][5511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.006 [INFO][5511] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.014 [INFO][5511] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.016 [INFO][5511] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.018 [INFO][5511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.018 [INFO][5511] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.019 [INFO][5511] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.028 [INFO][5511] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.043 [INFO][5511] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.136/26] block=192.168.28.128/26 handle="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.043 [INFO][5511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.136/26] handle="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.043 [INFO][5511] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
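These bracketed Calico CNI lines all share one shape: a timestamp, [LEVEL][id], source file and line number, then the message. When eyeballing a long IPAM trace like this one it helps to pull the fields apart mechanically; a small sketch (the field labels are mine, not Calico's):

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches lines of the form seen above:
//   2026-01-20 01:42:51.043 [INFO][5511] ipam/ipam.go 878: message...
var calicoLine = regexp.MustCompile(
	`^(\d{4}-\d{2}-\d{2} \S+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`)

func main() {
	line := `2026-01-20 01:42:51.043 [INFO][5511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.136/26]`
	m := calicoLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("time=%s level=%s id=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```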
Jan 20 01:42:51.078600 containerd[1767]: 2026-01-20 01:42:51.043 [INFO][5511] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.136/26] IPv6=[] ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" HandleID="k8s-pod-network.17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.079168 containerd[1767]: 2026-01-20 01:42:51.047 [INFO][5476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"csi-node-driver-n8z22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c784e4199b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:51.079168 containerd[1767]: 2026-01-20 01:42:51.047 [INFO][5476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.136/32] ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.079168 containerd[1767]: 2026-01-20 01:42:51.047 [INFO][5476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c784e4199b ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.079168 containerd[1767]: 2026-01-20 01:42:51.053 [INFO][5476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.079168 containerd[1767]: 2026-01-20 01:42:51.053 [INFO][5476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a", Pod:"csi-node-driver-n8z22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c784e4199b", MAC:"66:91:66:4b:06:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:51.079168 containerd[1767]: 2026-01-20 01:42:51.074 [INFO][5476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a" Namespace="calico-system" Pod="csi-node-driver-n8z22" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:42:51.098602 containerd[1767]: time="2026-01-20T01:42:51.098093112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:51.098602 containerd[1767]: time="2026-01-20T01:42:51.098153672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:51.098602 containerd[1767]: time="2026-01-20T01:42:51.098173112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:51.098602 containerd[1767]: time="2026-01-20T01:42:51.098251352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:51.112188 systemd[1]: Started cri-containerd-17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a.scope - libcontainer container 17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a. 
Jan 20 01:42:51.131607 containerd[1767]: time="2026-01-20T01:42:51.131521187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n8z22,Uid:e68b55a2-bd34-4f7b-b8c9-be9ad16a2026,Namespace:calico-system,Attempt:1,} returns sandbox id \"17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a\"" Jan 20 01:42:51.347920 containerd[1767]: time="2026-01-20T01:42:51.347527515Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:51.349888 containerd[1767]: time="2026-01-20T01:42:51.349758834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:51.349888 containerd[1767]: time="2026-01-20T01:42:51.349840794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:51.350630 kubelet[3178]: E0120 01:42:51.350106 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:51.350630 kubelet[3178]: E0120 01:42:51.350169 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:51.350630 kubelet[3178]: E0120 01:42:51.350378 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27gzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d975bd6cf-mbc2x_calico-apiserver(0e8d57de-35ca-4ff1-828c-b0edcfa72a11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:51.351741 containerd[1767]: time="2026-01-20T01:42:51.351299674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:42:51.351801 kubelet[3178]: E0120 01:42:51.351669 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:42:51.569305 containerd[1767]: time="2026-01-20T01:42:51.569261161Z" level=info msg="StopPodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\"" Jan 20 01:42:51.592248 containerd[1767]: time="2026-01-20T01:42:51.592203798Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:51.596286 containerd[1767]: time="2026-01-20T01:42:51.596235877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:42:51.596367 containerd[1767]: time="2026-01-20T01:42:51.596345917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:42:51.596788 kubelet[3178]: E0120 01:42:51.596750 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:42:51.596849 kubelet[3178]: E0120 01:42:51.596800 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:42:51.599126 kubelet[3178]: E0120 01:42:51.598849 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:51.602301 containerd[1767]: time="2026-01-20T01:42:51.602271236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.625 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.627 [INFO][5649] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" iface="eth0" netns="/var/run/netns/cni-00c7e369-48f3-f65c-479a-79fdbeb492d9" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.627 [INFO][5649] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" iface="eth0" netns="/var/run/netns/cni-00c7e369-48f3-f65c-479a-79fdbeb492d9" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.627 [INFO][5649] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" iface="eth0" netns="/var/run/netns/cni-00c7e369-48f3-f65c-479a-79fdbeb492d9" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.627 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.627 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.655 [INFO][5656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.655 [INFO][5656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.655 [INFO][5656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.663 [WARNING][5656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.663 [INFO][5656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.665 [INFO][5656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:42:51.668863 containerd[1767]: 2026-01-20 01:42:51.666 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:42:51.669552 containerd[1767]: time="2026-01-20T01:42:51.669052426Z" level=info msg="TearDown network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" successfully" Jan 20 01:42:51.669552 containerd[1767]: time="2026-01-20T01:42:51.669081906Z" level=info msg="StopPodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" returns successfully" Jan 20 01:42:51.669688 containerd[1767]: time="2026-01-20T01:42:51.669663986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-k8n9d,Uid:106a00f0-806f-4139-9f3e-5722fa42f199,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:42:51.703578 systemd[1]: run-netns-cni\x2d00c7e369\x2d48f3\x2df65c\x2d479a\x2d79fdbeb492d9.mount: Deactivated successfully. 
Jan 20 01:42:51.767442 kubelet[3178]: E0120 01:42:51.767383 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:42:51.841996 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Jan 20 01:42:51.846089 systemd-networkd[1367]: cali71f59f980e8: Link UP Jan 20 01:42:51.846529 systemd-networkd[1367]: cali71f59f980e8: Gained carrier Jan 20 01:42:51.873053 containerd[1767]: time="2026-01-20T01:42:51.872933196Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:51.876103 containerd[1767]: time="2026-01-20T01:42:51.876047715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:42:51.876255 containerd[1767]: time="2026-01-20T01:42:51.876233635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:42:51.877149 kubelet[3178]: E0120 01:42:51.877112 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:42:51.877235 kubelet[3178]: E0120 01:42:51.877180 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:42:51.877346 kubelet[3178]: E0120 01:42:51.877279 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:51.878688 kubelet[3178]: E0120 01:42:51.878640 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.741 [INFO][5666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0 calico-apiserver-9899c86f9- calico-apiserver 106a00f0-806f-4139-9f3e-5722fa42f199 1047 0 2026-01-20 01:42:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9899c86f9 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-0046389dc1 calico-apiserver-9899c86f9-k8n9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali71f59f980e8 [] [] }} ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.741 [INFO][5666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.770 [INFO][5674] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" HandleID="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.770 [INFO][5674] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" HandleID="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-0046389dc1", "pod":"calico-apiserver-9899c86f9-k8n9d", "timestamp":"2026-01-20 01:42:51.770726371 +0000 UTC"}, Hostname:"ci-4081.3.6-n-0046389dc1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.771 [INFO][5674] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.771 [INFO][5674] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.771 [INFO][5674] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-0046389dc1' Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.782 [INFO][5674] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.794 [INFO][5674] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.804 [INFO][5674] ipam/ipam.go 511: Trying affinity for 192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.806 [INFO][5674] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.808 [INFO][5674] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.128/26 host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.808 [INFO][5674] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.128/26 handle="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.810 [INFO][5674] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53 Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.819 [INFO][5674] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.128/26 handle="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.831 [INFO][5674] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.137/26] block=192.168.28.128/26 handle="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.831 [INFO][5674] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.137/26] handle="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" host="ci-4081.3.6-n-0046389dc1" Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.831 [INFO][5674] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
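The Workload and WorkloadEndpoint names in these IPAM lines, e.g. ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0, appear to join node, orchestrator, pod, and interface with single dashes while doubling any dash inside a field so the separators stay unambiguous. A sketch of that apparent convention, reconstructed from the names visible in this log rather than from Calico's published code:

```go
package main

import (
	"fmt"
	"strings"
)

// endpointName composes node-k8s-pod-iface, doubling dashes within fields.
func endpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), "k8s", esc(pod), esc(iface)}, "-")
}

func main() {
	// Reproduces the endpoint name logged above.
	fmt.Println(endpointName(
		"ci-4081.3.6-n-0046389dc1",
		"calico-apiserver-9899c86f9-k8n9d",
		"eth0",
	))
}
```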
Jan 20 01:42:51.879420 containerd[1767]: 2026-01-20 01:42:51.831 [INFO][5674] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.137/26] IPv6=[] ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" HandleID="k8s-pod-network.62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.879890 containerd[1767]: 2026-01-20 01:42:51.834 [INFO][5666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"106a00f0-806f-4139-9f3e-5722fa42f199", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"", Pod:"calico-apiserver-9899c86f9-k8n9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71f59f980e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:51.879890 containerd[1767]: 2026-01-20 01:42:51.834 [INFO][5666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.137/32] ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.879890 containerd[1767]: 2026-01-20 01:42:51.834 [INFO][5666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71f59f980e8 ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.879890 containerd[1767]: 2026-01-20 01:42:51.846 [INFO][5666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.879890 containerd[1767]: 2026-01-20 01:42:51.848 [INFO][5666] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"106a00f0-806f-4139-9f3e-5722fa42f199", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53", Pod:"calico-apiserver-9899c86f9-k8n9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71f59f980e8", MAC:"c6:0a:c3:ef:ac:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:42:51.879890 containerd[1767]: 2026-01-20 01:42:51.874 [INFO][5666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53" Namespace="calico-apiserver" Pod="calico-apiserver-9899c86f9-k8n9d" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:42:51.912034 containerd[1767]: time="2026-01-20T01:42:51.910482950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:51.912034 containerd[1767]: time="2026-01-20T01:42:51.910557390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:51.912034 containerd[1767]: time="2026-01-20T01:42:51.910572550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:51.912034 containerd[1767]: time="2026-01-20T01:42:51.910663310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:51.947889 systemd[1]: Started cri-containerd-62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53.scope - libcontainer container 62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53. 
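Before the endpoint is written back to the datastore, the plugin picks the host-side veth name cali71f59f980e8: the "cali" prefix plus 11 hex characters, which keeps the name inside Linux's 15-byte IFNAMSIZ limit while staying deterministic per workload. The sketch below reproduces that shape; the exact string Calico feeds into the hash is an assumption here, not taken from the log:

```go
// veth_name.go - sketch of deriving a stable, <=15-char host-side veth name
// from workload identity, matching the "cali" + 11 hex chars pattern in the
// log. The namespace+"."+pod hash input is an assumption for illustration.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName returns "cali" plus the first 11 hex chars of a SHA-1 digest,
// so the result fits the kernel's IFNAMSIZ limit (15 bytes plus NUL).
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("calico-apiserver", "calico-apiserver-9899c86f9-k8n9d"))
}
```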
Jan 20 01:42:51.984371 containerd[1767]: time="2026-01-20T01:42:51.984335499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9899c86f9-k8n9d,Uid:106a00f0-806f-4139-9f3e-5722fa42f199,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53\"" Jan 20 01:42:51.987450 containerd[1767]: time="2026-01-20T01:42:51.987126379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:42:52.234560 containerd[1767]: time="2026-01-20T01:42:52.234504061Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:52.236971 containerd[1767]: time="2026-01-20T01:42:52.236857221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:42:52.236971 containerd[1767]: time="2026-01-20T01:42:52.236931621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:42:52.237185 kubelet[3178]: E0120 01:42:52.237144 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:52.237246 kubelet[3178]: E0120 01:42:52.237194 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:42:52.237590 kubelet[3178]: E0120 01:42:52.237359 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-k8n9d_calico-apiserver(106a00f0-806f-4139-9f3e-5722fa42f199): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:52.238895 kubelet[3178]: E0120 01:42:52.238838 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:42:52.414040 systemd-networkd[1367]: cali7c784e4199b: Gained IPv6LL Jan 20 01:42:52.477072 systemd-networkd[1367]: cali866d07bb205: Gained IPv6LL Jan 20 01:42:52.772335 kubelet[3178]: E0120 01:42:52.772275 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:42:52.773310 kubelet[3178]: E0120 01:42:52.773109 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:42:52.774192 kubelet[3178]: E0120 01:42:52.774134 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:42:53.181090 systemd-networkd[1367]: cali71f59f980e8: Gained IPv6LL Jan 20 01:42:53.774556 kubelet[3178]: E0120 01:42:53.774366 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:42:54.572418 containerd[1767]: time="2026-01-20T01:42:54.570622110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:42:54.818788 containerd[1767]: time="2026-01-20T01:42:54.818731392Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:54.821201 containerd[1767]: time="2026-01-20T01:42:54.821169752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:42:54.821271 containerd[1767]: time="2026-01-20T01:42:54.821252312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:42:54.821416 kubelet[3178]: E0120 01:42:54.821366 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:54.821701 kubelet[3178]: E0120 01:42:54.821424 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:42:54.821701 kubelet[3178]: E0120 01:42:54.821571 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:496be695959a4924850b56723f0d0926,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:54.824586 containerd[1767]: time="2026-01-20T01:42:54.824248312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:42:54.908309 kubelet[3178]: I0120 01:42:54.908267 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:42:55.081563 containerd[1767]: time="2026-01-20T01:42:55.081101993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:42:55.085673 containerd[1767]: time="2026-01-20T01:42:55.085623672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:42:55.085774 containerd[1767]: time="2026-01-20T01:42:55.085741432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:42:55.086116 kubelet[3178]: E0120 01:42:55.085945 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:55.086116 kubelet[3178]: E0120 01:42:55.086000 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:42:55.086708 kubelet[3178]: E0120 01:42:55.086417 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:42:55.087999 kubelet[3178]: E0120 01:42:55.087963 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:43:00.575743 containerd[1767]: time="2026-01-20T01:43:00.575701493Z" level=info msg="StopPodSandbox for 
\"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\"" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.613 [WARNING][5805] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b059e7e-b61a-45ee-b787-908d675a8c0c", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b", Pod:"coredns-668d6bf9bc-5t5k4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bbcf37339", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.614 [INFO][5805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.614 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" iface="eth0" netns="" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.614 [INFO][5805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.614 [INFO][5805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.640 [INFO][5812] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.640 [INFO][5812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.640 [INFO][5812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.655 [WARNING][5812] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.655 [INFO][5812] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.656 [INFO][5812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:00.660600 containerd[1767]: 2026-01-20 01:43:00.658 [INFO][5805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.661072 containerd[1767]: time="2026-01-20T01:43:00.660632119Z" level=info msg="TearDown network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" successfully" Jan 20 01:43:00.661072 containerd[1767]: time="2026-01-20T01:43:00.660664839Z" level=info msg="StopPodSandbox for \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" returns successfully" Jan 20 01:43:00.661263 containerd[1767]: time="2026-01-20T01:43:00.661237359Z" level=info msg="RemovePodSandbox for \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\"" Jan 20 01:43:00.662660 containerd[1767]: time="2026-01-20T01:43:00.662631318Z" level=info msg="Forcibly stopping sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\"" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.695 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5b059e7e-b61a-45ee-b787-908d675a8c0c", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"eb7804dde2f9979acfc5ba6ca01256ed20199093487f849d14775a0dfae4ac0b", Pod:"coredns-668d6bf9bc-5t5k4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bbcf37339", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.696 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.696 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" iface="eth0" netns="" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.696 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.696 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.718 [INFO][5833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.718 [INFO][5833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.718 [INFO][5833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.726 [WARNING][5833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.727 [INFO][5833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" HandleID="k8s-pod-network.8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--5t5k4-eth0" Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.728 [INFO][5833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:00.732477 containerd[1767]: 2026-01-20 01:43:00.730 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7" Jan 20 01:43:00.732977 containerd[1767]: time="2026-01-20T01:43:00.732517467Z" level=info msg="TearDown network for sandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" successfully" Jan 20 01:43:00.739058 containerd[1767]: time="2026-01-20T01:43:00.739019786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:00.739151 containerd[1767]: time="2026-01-20T01:43:00.739107426Z" level=info msg="RemovePodSandbox \"8e18e0bf6034e67a79580b1e843fa9ab54247e76ed9d0f66311afe3b5dbb36c7\" returns successfully" Jan 20 01:43:00.739707 containerd[1767]: time="2026-01-20T01:43:00.739685465Z" level=info msg="StopPodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\"" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.772 [WARNING][5847] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.772 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.772 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" iface="eth0" netns="" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.772 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.772 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.794 [INFO][5854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.794 [INFO][5854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.795 [INFO][5854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.804 [WARNING][5854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.804 [INFO][5854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.805 [INFO][5854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:00.809820 containerd[1767]: 2026-01-20 01:43:00.807 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.811358 containerd[1767]: time="2026-01-20T01:43:00.809849254Z" level=info msg="TearDown network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" successfully" Jan 20 01:43:00.811358 containerd[1767]: time="2026-01-20T01:43:00.809870854Z" level=info msg="StopPodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" returns successfully" Jan 20 01:43:00.811358 containerd[1767]: time="2026-01-20T01:43:00.810345814Z" level=info msg="RemovePodSandbox for \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\"" Jan 20 01:43:00.811358 containerd[1767]: time="2026-01-20T01:43:00.810370694Z" level=info msg="Forcibly stopping sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\"" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.855 [WARNING][5868] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" WorkloadEndpoint="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.856 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.856 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" iface="eth0" netns="" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.856 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.856 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.877 [INFO][5876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.877 [INFO][5876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.877 [INFO][5876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.886 [WARNING][5876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.886 [INFO][5876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" HandleID="k8s-pod-network.ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Workload="ci--4081.3.6--n--0046389dc1-k8s-whisker--f5fc59cb9--tchkn-eth0" Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.887 [INFO][5876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:00.891604 containerd[1767]: 2026-01-20 01:43:00.889 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0" Jan 20 01:43:00.893043 containerd[1767]: time="2026-01-20T01:43:00.891547920Z" level=info msg="TearDown network for sandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" successfully" Jan 20 01:43:00.898979 containerd[1767]: time="2026-01-20T01:43:00.898924559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:00.899051 containerd[1767]: time="2026-01-20T01:43:00.898997279Z" level=info msg="RemovePodSandbox \"ccd31514342bcee3db6189afb870d1e64f991ad8a699ef5e58f3ebaa33ffdac0\" returns successfully" Jan 20 01:43:00.899568 containerd[1767]: time="2026-01-20T01:43:00.899544439Z" level=info msg="StopPodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\"" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.932 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"106a00f0-806f-4139-9f3e-5722fa42f199", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53", Pod:"calico-apiserver-9899c86f9-k8n9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71f59f980e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.932 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.932 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" iface="eth0" netns="" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.932 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.932 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.952 [INFO][5897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.952 [INFO][5897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.952 [INFO][5897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.960 [WARNING][5897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.960 [INFO][5897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.962 [INFO][5897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:00.965587 containerd[1767]: 2026-01-20 01:43:00.963 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:00.965587 containerd[1767]: time="2026-01-20T01:43:00.965462468Z" level=info msg="TearDown network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" successfully" Jan 20 01:43:00.965587 containerd[1767]: time="2026-01-20T01:43:00.965491388Z" level=info msg="StopPodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" returns successfully" Jan 20 01:43:00.966087 containerd[1767]: time="2026-01-20T01:43:00.965949548Z" level=info msg="RemovePodSandbox for \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\"" Jan 20 01:43:00.966087 containerd[1767]: time="2026-01-20T01:43:00.965979948Z" level=info msg="Forcibly stopping sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\"" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.002 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"106a00f0-806f-4139-9f3e-5722fa42f199", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"62ce0b959fd61da0435635cc90e3cf10544d169d51552080378e40df2bbd7c53", Pod:"calico-apiserver-9899c86f9-k8n9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71f59f980e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.002 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.002 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" iface="eth0" netns="" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.002 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.002 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.021 [INFO][5918] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.021 [INFO][5918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.021 [INFO][5918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.029 [WARNING][5918] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.029 [INFO][5918] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" HandleID="k8s-pod-network.86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--k8n9d-eth0" Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.031 [INFO][5918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.038339 containerd[1767]: 2026-01-20 01:43:01.033 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570" Jan 20 01:43:01.040468 containerd[1767]: time="2026-01-20T01:43:01.038743175Z" level=info msg="TearDown network for sandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" successfully" Jan 20 01:43:01.078682 containerd[1767]: time="2026-01-20T01:43:01.078625209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:01.078914 containerd[1767]: time="2026-01-20T01:43:01.078710769Z" level=info msg="RemovePodSandbox \"86bf9b1d2b140c02aaf2eba48f5b90b63673442bf73a8bb1ca81d46123834570\" returns successfully" Jan 20 01:43:01.079539 containerd[1767]: time="2026-01-20T01:43:01.079513369Z" level=info msg="StopPodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\"" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.111 [WARNING][5932] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8a0da74e-5da6-4d17-baef-898cc44d92e7", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444", Pod:"goldmane-666569f655-bwmf9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22174637fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.111 [INFO][5932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.111 [INFO][5932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" iface="eth0" netns="" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.111 [INFO][5932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.111 [INFO][5932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.132 [INFO][5939] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.132 [INFO][5939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.132 [INFO][5939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.145 [WARNING][5939] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.145 [INFO][5939] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.147 [INFO][5939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.151645 containerd[1767]: 2026-01-20 01:43:01.149 [INFO][5932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.153762 containerd[1767]: time="2026-01-20T01:43:01.152992636Z" level=info msg="TearDown network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" successfully" Jan 20 01:43:01.153762 containerd[1767]: time="2026-01-20T01:43:01.153022116Z" level=info msg="StopPodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" returns successfully" Jan 20 01:43:01.154278 containerd[1767]: time="2026-01-20T01:43:01.154047796Z" level=info msg="RemovePodSandbox for \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\"" Jan 20 01:43:01.154278 containerd[1767]: time="2026-01-20T01:43:01.154077076Z" level=info msg="Forcibly stopping sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\"" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.195 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8a0da74e-5da6-4d17-baef-898cc44d92e7", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"6efab69d4106f47057bb042750913f99919af88e074c28df4bd98a7f101b8444", Pod:"goldmane-666569f655-bwmf9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali22174637fbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.195 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.195 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" iface="eth0" netns="" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.196 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.196 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.231 [INFO][5960] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.232 [INFO][5960] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.232 [INFO][5960] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.243 [WARNING][5960] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.244 [INFO][5960] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" HandleID="k8s-pod-network.afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Workload="ci--4081.3.6--n--0046389dc1-k8s-goldmane--666569f655--bwmf9-eth0" Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.245 [INFO][5960] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.252151 containerd[1767]: 2026-01-20 01:43:01.250 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504" Jan 20 01:43:01.252570 containerd[1767]: time="2026-01-20T01:43:01.252186220Z" level=info msg="TearDown network for sandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" successfully" Jan 20 01:43:01.259830 containerd[1767]: time="2026-01-20T01:43:01.259793618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:01.259961 containerd[1767]: time="2026-01-20T01:43:01.259850978Z" level=info msg="RemovePodSandbox \"afe7c4a5c524dd17362e513647edda7ef448d4a48ff54e2691b3862496845504\" returns successfully" Jan 20 01:43:01.261117 containerd[1767]: time="2026-01-20T01:43:01.260802298Z" level=info msg="StopPodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\"" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.310 [WARNING][5974] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0", GenerateName:"calico-apiserver-7d975bd6cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e8d57de-35ca-4ff1-828c-b0edcfa72a11", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d975bd6cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196", Pod:"calico-apiserver-7d975bd6cf-mbc2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali866d07bb205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.310 [INFO][5974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.310 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" iface="eth0" netns="" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.310 [INFO][5974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.311 [INFO][5974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.331 [INFO][5981] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.331 [INFO][5981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.331 [INFO][5981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.342 [WARNING][5981] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.342 [INFO][5981] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.344 [INFO][5981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.348435 containerd[1767]: 2026-01-20 01:43:01.346 [INFO][5974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.350082 containerd[1767]: time="2026-01-20T01:43:01.349153923Z" level=info msg="TearDown network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" successfully" Jan 20 01:43:01.350082 containerd[1767]: time="2026-01-20T01:43:01.349190723Z" level=info msg="StopPodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" returns successfully" Jan 20 01:43:01.350288 containerd[1767]: time="2026-01-20T01:43:01.350063003Z" level=info msg="RemovePodSandbox for \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\"" Jan 20 01:43:01.350288 containerd[1767]: time="2026-01-20T01:43:01.350213163Z" level=info msg="Forcibly stopping sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\"" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.393 [WARNING][5995] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0", GenerateName:"calico-apiserver-7d975bd6cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e8d57de-35ca-4ff1-828c-b0edcfa72a11", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d975bd6cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"cb00aea0ff3fe069d2862ee9100a6369c9a398d71f74edaafeeae6e1ab02e196", Pod:"calico-apiserver-7d975bd6cf-mbc2x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali866d07bb205", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.394 [INFO][5995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.394 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" iface="eth0" netns="" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.394 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.394 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.419 [INFO][6002] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.419 [INFO][6002] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.419 [INFO][6002] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.429 [WARNING][6002] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.429 [INFO][6002] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" HandleID="k8s-pod-network.54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--7d975bd6cf--mbc2x-eth0" Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.430 [INFO][6002] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.435024 containerd[1767]: 2026-01-20 01:43:01.432 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421" Jan 20 01:43:01.435024 containerd[1767]: time="2026-01-20T01:43:01.434109589Z" level=info msg="TearDown network for sandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" successfully" Jan 20 01:43:01.441544 containerd[1767]: time="2026-01-20T01:43:01.441478628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:01.441714 containerd[1767]: time="2026-01-20T01:43:01.441696388Z" level=info msg="RemovePodSandbox \"54ae6f3b59eb50e2ca93565463545ed232b1b6a22937d56448a4a72ffe791421\" returns successfully" Jan 20 01:43:01.443278 containerd[1767]: time="2026-01-20T01:43:01.443257908Z" level=info msg="StopPodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\"" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.487 [WARNING][6016] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9", Pod:"coredns-668d6bf9bc-jj6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali228f47d438a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.488 [INFO][6016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.488 [INFO][6016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" iface="eth0" netns="" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.488 [INFO][6016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.488 [INFO][6016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.507 [INFO][6023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.508 [INFO][6023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.508 [INFO][6023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.517 [WARNING][6023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.517 [INFO][6023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.518 [INFO][6023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.524225 containerd[1767]: 2026-01-20 01:43:01.522 [INFO][6016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.525726 containerd[1767]: time="2026-01-20T01:43:01.525587934Z" level=info msg="TearDown network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" successfully" Jan 20 01:43:01.525726 containerd[1767]: time="2026-01-20T01:43:01.525629454Z" level=info msg="StopPodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" returns successfully" Jan 20 01:43:01.526915 containerd[1767]: time="2026-01-20T01:43:01.526031694Z" level=info msg="RemovePodSandbox for \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\"" Jan 20 01:43:01.526915 containerd[1767]: time="2026-01-20T01:43:01.526060294Z" level=info msg="Forcibly stopping sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\"" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.570 [WARNING][6037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9033ed8a-4ce4-4c81-8671-cf1d75ad0bd7", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"6353ea97c3906a3c047e5d648c643bab9d2ce52787b1daa390eeb05de60739d9", Pod:"coredns-668d6bf9bc-jj6z8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali228f47d438a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.570 [INFO][6037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.570 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" iface="eth0" netns="" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.570 [INFO][6037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.570 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.595 [INFO][6044] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.595 [INFO][6044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.595 [INFO][6044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.604 [WARNING][6044] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.604 [INFO][6044] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" HandleID="k8s-pod-network.216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Workload="ci--4081.3.6--n--0046389dc1-k8s-coredns--668d6bf9bc--jj6z8-eth0" Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.608 [INFO][6044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.613990 containerd[1767]: 2026-01-20 01:43:01.610 [INFO][6037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b" Jan 20 01:43:01.614607 containerd[1767]: time="2026-01-20T01:43:01.614028159Z" level=info msg="TearDown network for sandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" successfully" Jan 20 01:43:01.634425 containerd[1767]: time="2026-01-20T01:43:01.634379796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:01.634561 containerd[1767]: time="2026-01-20T01:43:01.634452796Z" level=info msg="RemovePodSandbox \"216ff5e1d7ecbc17524151ebc240b5e6ce0098b236512f09e8a804151a5b267b\" returns successfully" Jan 20 01:43:01.635007 containerd[1767]: time="2026-01-20T01:43:01.634956436Z" level=info msg="StopPodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\"" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.692 [WARNING][6058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a", Pod:"csi-node-driver-n8z22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c784e4199b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.693 [INFO][6058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.693 [INFO][6058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" iface="eth0" netns="" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.693 [INFO][6058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.693 [INFO][6058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.727 [INFO][6065] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.728 [INFO][6065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.728 [INFO][6065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.741 [WARNING][6065] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.741 [INFO][6065] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.742 [INFO][6065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.747995 containerd[1767]: 2026-01-20 01:43:01.745 [INFO][6058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.748970 containerd[1767]: time="2026-01-20T01:43:01.748029937Z" level=info msg="TearDown network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" successfully" Jan 20 01:43:01.748970 containerd[1767]: time="2026-01-20T01:43:01.748053297Z" level=info msg="StopPodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" returns successfully" Jan 20 01:43:01.748970 containerd[1767]: time="2026-01-20T01:43:01.748606976Z" level=info msg="RemovePodSandbox for \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\"" Jan 20 01:43:01.748970 containerd[1767]: time="2026-01-20T01:43:01.748637296Z" level=info msg="Forcibly stopping sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\"" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.803 [WARNING][6079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e68b55a2-bd34-4f7b-b8c9-be9ad16a2026", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"17351907d5202416b05165ccf4ee0d4af4e8515c9e53fd9a06b93c0d95f0e01a", Pod:"csi-node-driver-n8z22", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7c784e4199b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.803 [INFO][6079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.803 [INFO][6079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" iface="eth0" netns="" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.803 [INFO][6079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.803 [INFO][6079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.827 [INFO][6086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.827 [INFO][6086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.827 [INFO][6086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.837 [WARNING][6086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.837 [INFO][6086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" HandleID="k8s-pod-network.c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Workload="ci--4081.3.6--n--0046389dc1-k8s-csi--node--driver--n8z22-eth0" Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.838 [INFO][6086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.846941 containerd[1767]: 2026-01-20 01:43:01.842 [INFO][6079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f" Jan 20 01:43:01.846941 containerd[1767]: time="2026-01-20T01:43:01.846636520Z" level=info msg="TearDown network for sandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" successfully" Jan 20 01:43:01.854141 containerd[1767]: time="2026-01-20T01:43:01.853679399Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:01.854141 containerd[1767]: time="2026-01-20T01:43:01.853749879Z" level=info msg="RemovePodSandbox \"c52cc20b53465dbf537c3c3d8982f8194c3e87c532db3c00a99203b5c84df17f\" returns successfully" Jan 20 01:43:01.854296 containerd[1767]: time="2026-01-20T01:43:01.854251079Z" level=info msg="StopPodSandbox for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\"" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.893 [WARNING][6100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0", GenerateName:"calico-kube-controllers-cd848bd58-", Namespace:"calico-system", SelfLink:"", UID:"f24ea947-18f0-4003-bcc7-bb3d7376a6ba", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd848bd58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474", Pod:"calico-kube-controllers-cd848bd58-mqt54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2e5cb6de89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.893 [INFO][6100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.893 [INFO][6100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" iface="eth0" netns="" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.893 [INFO][6100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.893 [INFO][6100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.912 [INFO][6107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.912 [INFO][6107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.912 [INFO][6107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.920 [WARNING][6107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.920 [INFO][6107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.921 [INFO][6107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:01.926472 containerd[1767]: 2026-01-20 01:43:01.923 [INFO][6100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:01.926472 containerd[1767]: time="2026-01-20T01:43:01.926082667Z" level=info msg="TearDown network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" successfully" Jan 20 01:43:01.926472 containerd[1767]: time="2026-01-20T01:43:01.926106867Z" level=info msg="StopPodSandbox for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" returns successfully" Jan 20 01:43:01.928447 containerd[1767]: time="2026-01-20T01:43:01.926999067Z" level=info msg="RemovePodSandbox for \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\"" Jan 20 01:43:01.928447 containerd[1767]: time="2026-01-20T01:43:01.927025787Z" level=info msg="Forcibly stopping sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\"" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:01.991 [WARNING][6122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0", GenerateName:"calico-kube-controllers-cd848bd58-", Namespace:"calico-system", SelfLink:"", UID:"f24ea947-18f0-4003-bcc7-bb3d7376a6ba", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd848bd58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"e330e4f371ba640d5c2be1db60fba6c667eb2765b69503878b1c42936ea1d474", Pod:"calico-kube-controllers-cd848bd58-mqt54", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia2e5cb6de89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:01.991 [INFO][6122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:01.991 [INFO][6122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" iface="eth0" netns="" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:01.991 [INFO][6122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:01.992 [INFO][6122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.022 [INFO][6131] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.022 [INFO][6131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.022 [INFO][6131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.034 [WARNING][6131] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.034 [INFO][6131] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" HandleID="k8s-pod-network.35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--kube--controllers--cd848bd58--mqt54-eth0" Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.036 [INFO][6131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:02.040475 containerd[1767]: 2026-01-20 01:43:02.038 [INFO][6122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809" Jan 20 01:43:02.041262 containerd[1767]: time="2026-01-20T01:43:02.040952088Z" level=info msg="TearDown network for sandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" successfully" Jan 20 01:43:02.051291 containerd[1767]: time="2026-01-20T01:43:02.051138406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:02.051291 containerd[1767]: time="2026-01-20T01:43:02.051200566Z" level=info msg="RemovePodSandbox \"35261c204897aadef7f351361145ed6f09f86aa9ee0a8b469a96d2d0df4ca809\" returns successfully" Jan 20 01:43:02.052085 containerd[1767]: time="2026-01-20T01:43:02.051753886Z" level=info msg="StopPodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\"" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.100 [WARNING][6145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e893c04-8b87-47f0-b2a8-981971a5bfc3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0", Pod:"calico-apiserver-9899c86f9-mvhh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c695ce7f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.101 [INFO][6145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.101 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" iface="eth0" netns="" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.101 [INFO][6145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.101 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.134 [INFO][6152] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.135 [INFO][6152] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.135 [INFO][6152] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.151 [WARNING][6152] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.151 [INFO][6152] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.152 [INFO][6152] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:02.158651 containerd[1767]: 2026-01-20 01:43:02.157 [INFO][6145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.159092 containerd[1767]: time="2026-01-20T01:43:02.158691748Z" level=info msg="TearDown network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" successfully" Jan 20 01:43:02.159092 containerd[1767]: time="2026-01-20T01:43:02.158720108Z" level=info msg="StopPodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" returns successfully" Jan 20 01:43:02.160382 containerd[1767]: time="2026-01-20T01:43:02.160356588Z" level=info msg="RemovePodSandbox for \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\"" Jan 20 01:43:02.160454 containerd[1767]: time="2026-01-20T01:43:02.160387028Z" level=info msg="Forcibly stopping sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\"" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.226 [WARNING][6166] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0", GenerateName:"calico-apiserver-9899c86f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"0e893c04-8b87-47f0-b2a8-981971a5bfc3", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9899c86f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-0046389dc1", ContainerID:"0c2582bcbe2ee6f766a8ebb1d482a0edbfff8b08559ac0013ea6850bce2bf8d0", Pod:"calico-apiserver-9899c86f9-mvhh6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali59c695ce7f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.226 [INFO][6166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.226 [INFO][6166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" iface="eth0" netns="" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.226 [INFO][6166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.226 [INFO][6166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.250 [INFO][6173] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.250 [INFO][6173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.250 [INFO][6173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.266 [WARNING][6173] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.266 [INFO][6173] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" HandleID="k8s-pod-network.f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Workload="ci--4081.3.6--n--0046389dc1-k8s-calico--apiserver--9899c86f9--mvhh6-eth0" Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.267 [INFO][6173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:02.271408 containerd[1767]: 2026-01-20 01:43:02.269 [INFO][6166] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a" Jan 20 01:43:02.271849 containerd[1767]: time="2026-01-20T01:43:02.271475089Z" level=info msg="TearDown network for sandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" successfully" Jan 20 01:43:02.298570 containerd[1767]: time="2026-01-20T01:43:02.298439604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:02.299991 containerd[1767]: time="2026-01-20T01:43:02.298546164Z" level=info msg="RemovePodSandbox \"f3594754fda9360fc92e656c8b0070bb3ac424ff35d2e4cc58c8a01f6d87701a\" returns successfully" Jan 20 01:43:02.574068 containerd[1767]: time="2026-01-20T01:43:02.572783718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:43:02.849489 containerd[1767]: time="2026-01-20T01:43:02.849373792Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:02.853192 containerd[1767]: time="2026-01-20T01:43:02.853093231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:43:02.853192 containerd[1767]: time="2026-01-20T01:43:02.853149511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:02.853380 kubelet[3178]: E0120 01:43:02.853336 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:02.853637 kubelet[3178]: E0120 01:43:02.853396 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:02.854019 containerd[1767]: time="2026-01-20T01:43:02.853838631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:43:02.863057 kubelet[3178]: E0120 01:43:02.862993 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n7lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd848bd58-mqt54_calico-system(f24ea947-18f0-4003-bcc7-bb3d7376a6ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:02.865914 kubelet[3178]: E0120 01:43:02.864173 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:43:03.135181 containerd[1767]: time="2026-01-20T01:43:03.134709744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:03.139211 containerd[1767]: time="2026-01-20T01:43:03.139164304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:43:03.139298 containerd[1767]: time="2026-01-20T01:43:03.139261464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:03.139892 kubelet[3178]: E0120 01:43:03.139457 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:03.139892 kubelet[3178]: E0120 01:43:03.139505 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:03.139892 kubelet[3178]: E0120 01:43:03.139719 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bwmf9_calico-system(8a0da74e-5da6-4d17-baef-898cc44d92e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:03.140777 containerd[1767]: time="2026-01-20T01:43:03.140336983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:03.141680 kubelet[3178]: E0120 01:43:03.141656 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:43:03.372985 containerd[1767]: time="2026-01-20T01:43:03.372939784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:03.375497 containerd[1767]: time="2026-01-20T01:43:03.375343264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:03.375497 containerd[1767]: time="2026-01-20T01:43:03.375434424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:03.377910 kubelet[3178]: E0120 01:43:03.375999 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:03.377910 kubelet[3178]: E0120 01:43:03.376053 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:03.377910 kubelet[3178]: E0120 01:43:03.376170 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-mvhh6_calico-apiserver(0e893c04-8b87-47f0-b2a8-981971a5bfc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:03.378149 kubelet[3178]: E0120 01:43:03.378114 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:43:05.570685 kubelet[3178]: E0120 01:43:05.570591 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:43:06.573037 containerd[1767]: time="2026-01-20T01:43:06.571359369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:43:06.840840 containerd[1767]: time="2026-01-20T01:43:06.838560277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:06.843871 containerd[1767]: time="2026-01-20T01:43:06.843712596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:43:06.843871 containerd[1767]: time="2026-01-20T01:43:06.843732956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:43:06.844327 kubelet[3178]: E0120 01:43:06.844077 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:06.844327 kubelet[3178]: E0120 01:43:06.844180 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:06.844327 kubelet[3178]: E0120 01:43:06.844289 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:06.847560 containerd[1767]: time="2026-01-20T01:43:06.847314436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:43:07.109832 containerd[1767]: time="2026-01-20T01:43:07.109237423Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:07.114819 containerd[1767]: time="2026-01-20T01:43:07.114725022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:43:07.114880 containerd[1767]: time="2026-01-20T01:43:07.114802982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:43:07.115060 kubelet[3178]: E0120 01:43:07.114980 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:07.115060 kubelet[3178]: E0120 01:43:07.115035 3178 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:07.116339 kubelet[3178]: E0120 01:43:07.115221 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:07.116478 kubelet[3178]: E0120 01:43:07.116344 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:43:07.570426 containerd[1767]: time="2026-01-20T01:43:07.570388571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:07.825549 containerd[1767]: time="2026-01-20T01:43:07.825320120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:07.829170 containerd[1767]: time="2026-01-20T01:43:07.827636040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:07.829170 containerd[1767]: time="2026-01-20T01:43:07.827723160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:07.829316 kubelet[3178]: E0120 01:43:07.827840 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:07.829316 kubelet[3178]: E0120 01:43:07.827882 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:07.829316 kubelet[3178]: E0120 01:43:07.828007 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-k8n9d_calico-apiserver(106a00f0-806f-4139-9f3e-5722fa42f199): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:07.829653 kubelet[3178]: E0120 01:43:07.829617 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:43:08.584105 containerd[1767]: time="2026-01-20T01:43:08.584018609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:08.823392 containerd[1767]: time="2026-01-20T01:43:08.823328641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:08.825482 containerd[1767]: time="2026-01-20T01:43:08.825449040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:08.825551 containerd[1767]: time="2026-01-20T01:43:08.825531880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:08.825876 kubelet[3178]: E0120 01:43:08.825653 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:08.825876 kubelet[3178]: E0120 01:43:08.825702 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:08.825876 kubelet[3178]: E0120 
01:43:08.825820 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27gzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d975bd6cf-mbc2x_calico-apiserver(0e8d57de-35ca-4ff1-828c-b0edcfa72a11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:08.827291 kubelet[3178]: E0120 01:43:08.827144 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:43:13.570577 kubelet[3178]: E0120 01:43:13.569992 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:43:15.570511 kubelet[3178]: E0120 01:43:15.570114 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:43:15.571121 kubelet[3178]: E0120 01:43:15.571069 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:43:17.571806 kubelet[3178]: E0120 01:43:17.571733 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:43:19.570168 kubelet[3178]: E0120 01:43:19.569484 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:43:20.572380 containerd[1767]: time="2026-01-20T01:43:20.572145408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:43:20.802702 containerd[1767]: time="2026-01-20T01:43:20.802520441Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:20.805075 containerd[1767]: time="2026-01-20T01:43:20.804981280Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:43:20.805075 containerd[1767]: time="2026-01-20T01:43:20.805057600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:43:20.805753 kubelet[3178]: E0120 01:43:20.805276 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:20.805753 kubelet[3178]: E0120 01:43:20.805321 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:20.805753 kubelet[3178]: E0120 01:43:20.805422 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:496be695959a4924850b56723f0d0926,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:20.807551 containerd[1767]: time="2026-01-20T01:43:20.807519680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:43:21.099204 containerd[1767]: time="2026-01-20T01:43:21.099031700Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 
01:43:21.110416 containerd[1767]: time="2026-01-20T01:43:21.110316817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:43:21.110416 containerd[1767]: time="2026-01-20T01:43:21.110376777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:21.110575 kubelet[3178]: E0120 01:43:21.110504 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:21.110575 kubelet[3178]: E0120 01:43:21.110548 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:21.110691 kubelet[3178]: E0120 01:43:21.110652 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:21.112044 kubelet[3178]: E0120 01:43:21.111990 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:43:21.570599 kubelet[3178]: E0120 01:43:21.570543 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:43:24.572346 containerd[1767]: time="2026-01-20T01:43:24.572096207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:43:24.850442 containerd[1767]: time="2026-01-20T01:43:24.849033988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:24.851324 containerd[1767]: time="2026-01-20T01:43:24.851238307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:43:24.851324 containerd[1767]: time="2026-01-20T01:43:24.851307067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:24.851495 kubelet[3178]: E0120 01:43:24.851437 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:24.851495 kubelet[3178]: E0120 01:43:24.851486 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:24.851834 kubelet[3178]: E0120 01:43:24.851604 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n7lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd848bd58-mqt54_calico-system(f24ea947-18f0-4003-bcc7-bb3d7376a6ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:24.852796 kubelet[3178]: E0120 01:43:24.852739 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:43:28.571696 containerd[1767]: time="2026-01-20T01:43:28.571656628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:28.885393 containerd[1767]: time="2026-01-20T01:43:28.885269120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:28.887336 containerd[1767]: time="2026-01-20T01:43:28.887293400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:28.887501 containerd[1767]: time="2026-01-20T01:43:28.887375320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:28.887699 kubelet[3178]: E0120 01:43:28.887484 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:28.887699 kubelet[3178]: E0120 01:43:28.887523 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:28.887699 kubelet[3178]: E0120 01:43:28.887631 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-mvhh6_calico-apiserver(0e893c04-8b87-47f0-b2a8-981971a5bfc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:28.889158 kubelet[3178]: E0120 01:43:28.888866 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:43:29.571409 containerd[1767]: time="2026-01-20T01:43:29.571191093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:43:29.831967 containerd[1767]: time="2026-01-20T01:43:29.831696357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:29.839128 containerd[1767]: time="2026-01-20T01:43:29.839077556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:43:29.839233 containerd[1767]: time="2026-01-20T01:43:29.839177756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:29.839418 kubelet[3178]: E0120 01:43:29.839377 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:29.839501 kubelet[3178]: E0120 01:43:29.839428 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:29.839598 kubelet[3178]: E0120 01:43:29.839552 
3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bwmf9_calico-system(8a0da74e-5da6-4d17-baef-898cc44d92e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:29.841569 kubelet[3178]: E0120 01:43:29.841531 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" 
podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:43:32.571373 containerd[1767]: time="2026-01-20T01:43:32.571133815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:43:32.812586 containerd[1767]: time="2026-01-20T01:43:32.812369689Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:32.815184 containerd[1767]: time="2026-01-20T01:43:32.815105929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:43:32.815289 containerd[1767]: time="2026-01-20T01:43:32.815184209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:43:32.815345 kubelet[3178]: E0120 01:43:32.815307 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:32.815629 kubelet[3178]: E0120 01:43:32.815357 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:32.815629 kubelet[3178]: E0120 01:43:32.815550 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:32.816346 containerd[1767]: time="2026-01-20T01:43:32.816109849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:33.078890 containerd[1767]: time="2026-01-20T01:43:33.078787959Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:33.083768 containerd[1767]: time="2026-01-20T01:43:33.083661918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:33.083768 containerd[1767]: time="2026-01-20T01:43:33.083734278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:33.083915 kubelet[3178]: E0120 01:43:33.083861 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:33.083981 kubelet[3178]: E0120 01:43:33.083920 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:33.084494 kubelet[3178]: E0120 01:43:33.084120 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-k8n9d_calico-apiserver(106a00f0-806f-4139-9f3e-5722fa42f199): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:33.084597 containerd[1767]: time="2026-01-20T01:43:33.084199398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:43:33.085886 kubelet[3178]: E0120 01:43:33.085839 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:43:33.354840 containerd[1767]: time="2026-01-20T01:43:33.354580626Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:33.357611 containerd[1767]: time="2026-01-20T01:43:33.356825306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:43:33.357611 containerd[1767]: time="2026-01-20T01:43:33.356914586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:43:33.357747 kubelet[3178]: E0120 01:43:33.357011 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:33.357747 kubelet[3178]: E0120 01:43:33.357052 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:33.357747 kubelet[3178]: E0120 01:43:33.357159 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:33.359133 kubelet[3178]: E0120 01:43:33.358997 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:43:33.572909 kubelet[3178]: E0120 01:43:33.572821 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:43:35.569385 kubelet[3178]: E0120 01:43:35.569345 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:43:36.572132 containerd[1767]: time="2026-01-20T01:43:36.571877053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:36.837798 containerd[1767]: time="2026-01-20T01:43:36.837676363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:36.840185 containerd[1767]: time="2026-01-20T01:43:36.840143242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:36.840298 containerd[1767]: time="2026-01-20T01:43:36.840248922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:36.840488 kubelet[3178]: E0120 01:43:36.840439 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:36.840810 kubelet[3178]: E0120 01:43:36.840497 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:36.840810 kubelet[3178]: E0120 01:43:36.840610 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27gzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d975bd6cf-mbc2x_calico-apiserver(0e8d57de-35ca-4ff1-828c-b0edcfa72a11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:36.841824 kubelet[3178]: E0120 01:43:36.841774 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:43:40.570816 kubelet[3178]: E0120 01:43:40.570710 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:43:43.570600 kubelet[3178]: E0120 01:43:43.570531 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:43:44.572685 kubelet[3178]: E0120 01:43:44.572326 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:43:44.574056 kubelet[3178]: E0120 01:43:44.573888 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:43:45.507832 systemd[1]: Started sshd@7-10.200.20.33:22-10.200.16.10:44564.service - OpenSSH per-connection server daemon (10.200.16.10:44564). Jan 20 01:43:45.968998 sshd[6232]: Accepted publickey for core from 10.200.16.10 port 44564 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:45.970590 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:45.977877 systemd-logind[1712]: New session 10 of user core. Jan 20 01:43:45.981076 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:43:46.377274 sshd[6232]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:46.381693 systemd[1]: sshd@7-10.200.20.33:22-10.200.16.10:44564.service: Deactivated successfully. Jan 20 01:43:46.385823 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:43:46.386849 systemd-logind[1712]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:43:46.388003 systemd-logind[1712]: Removed session 10. 
Jan 20 01:43:48.570721 kubelet[3178]: E0120 01:43:48.570675 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:43:49.570256 kubelet[3178]: E0120 01:43:49.570218 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:43:51.477985 systemd[1]: Started sshd@8-10.200.20.33:22-10.200.16.10:39050.service - OpenSSH per-connection server daemon (10.200.16.10:39050). Jan 20 01:43:51.570230 kubelet[3178]: E0120 01:43:51.570135 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:43:51.966700 sshd[6247]: Accepted publickey for core from 10.200.16.10 port 39050 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:51.968044 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:51.972519 systemd-logind[1712]: New session 11 of user core. Jan 20 01:43:51.979066 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 01:43:52.393441 sshd[6247]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:52.396343 systemd[1]: sshd@8-10.200.20.33:22-10.200.16.10:39050.service: Deactivated successfully. Jan 20 01:43:52.397841 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:43:52.401766 systemd-logind[1712]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:43:52.402929 systemd-logind[1712]: Removed session 11. 
Jan 20 01:43:55.569966 kubelet[3178]: E0120 01:43:55.569870 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:43:56.572866 kubelet[3178]: E0120 01:43:56.572222 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:43:57.482175 systemd[1]: Started sshd@9-10.200.20.33:22-10.200.16.10:39062.service - OpenSSH per-connection server daemon (10.200.16.10:39062). Jan 20 01:43:57.570302 kubelet[3178]: E0120 01:43:57.570261 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:43:57.973684 sshd[6285]: Accepted publickey for core from 10.200.16.10 port 39062 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:57.975487 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:57.981577 systemd-logind[1712]: New session 12 of user core. Jan 20 01:43:57.986040 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:43:58.411473 sshd[6285]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:58.415322 systemd[1]: sshd@9-10.200.20.33:22-10.200.16.10:39062.service: Deactivated successfully. Jan 20 01:43:58.418440 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:43:58.419790 systemd-logind[1712]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:43:58.421514 systemd-logind[1712]: Removed session 12. Jan 20 01:43:58.502231 systemd[1]: Started sshd@10-10.200.20.33:22-10.200.16.10:39070.service - OpenSSH per-connection server daemon (10.200.16.10:39070). Jan 20 01:43:58.951835 sshd[6299]: Accepted publickey for core from 10.200.16.10 port 39070 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:58.953399 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:58.960295 systemd-logind[1712]: New session 13 of user core. Jan 20 01:43:58.964038 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 01:43:59.396269 sshd[6299]: pam_unix(sshd:session): session closed for user core Jan 20 01:43:59.401601 systemd-logind[1712]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:43:59.402004 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:43:59.404115 systemd[1]: sshd@10-10.200.20.33:22-10.200.16.10:39070.service: Deactivated successfully. Jan 20 01:43:59.481322 systemd[1]: Started sshd@11-10.200.20.33:22-10.200.16.10:39074.service - OpenSSH per-connection server daemon (10.200.16.10:39074). Jan 20 01:43:59.571071 kubelet[3178]: E0120 01:43:59.571034 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:43:59.929530 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 39074 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:43:59.930986 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:43:59.934767 systemd-logind[1712]: New session 14 of user core. Jan 20 01:43:59.943102 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:44:00.328867 sshd[6310]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:00.334792 systemd[1]: sshd@11-10.200.20.33:22-10.200.16.10:39074.service: Deactivated successfully. Jan 20 01:44:00.336369 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:44:00.341156 systemd-logind[1712]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:44:00.342457 systemd-logind[1712]: Removed session 14. 
Jan 20 01:44:00.573533 kubelet[3178]: E0120 01:44:00.573218 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:44:02.572258 kubelet[3178]: E0120 01:44:02.571878 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:44:03.571148 containerd[1767]: time="2026-01-20T01:44:03.571102522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:44:03.831491 containerd[1767]: time="2026-01-20T01:44:03.831151310Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:03.833462 containerd[1767]: time="2026-01-20T01:44:03.833413910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:44:03.833564 containerd[1767]: time="2026-01-20T01:44:03.833511270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:44:03.833741 kubelet[3178]: E0120 01:44:03.833671 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:44:03.833741 kubelet[3178]: E0120 01:44:03.833732 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:44:03.834197 kubelet[3178]: E0120 01:44:03.833845 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:496be695959a4924850b56723f0d0926,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:03.835837 containerd[1767]: time="2026-01-20T01:44:03.835599669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:44:04.079830 containerd[1767]: time="2026-01-20T01:44:04.079771420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:04.082835 containerd[1767]: time="2026-01-20T01:44:04.082289300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:44:04.082835 containerd[1767]: time="2026-01-20T01:44:04.082368700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:04.082979 kubelet[3178]: E0120 01:44:04.082531 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:44:04.082979 kubelet[3178]: E0120 01:44:04.082577 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:44:04.082979 kubelet[3178]: E0120 01:44:04.082682 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwp7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-bc8ccf7c-bn7qp_calico-system(64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:04.084006 kubelet[3178]: E0120 01:44:04.083810 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:44:05.417161 systemd[1]: Started sshd@12-10.200.20.33:22-10.200.16.10:42520.service - OpenSSH per-connection server daemon (10.200.16.10:42520). 
Jan 20 01:44:05.871937 sshd[6329]: Accepted publickey for core from 10.200.16.10 port 42520 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:05.875289 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:05.880848 systemd-logind[1712]: New session 15 of user core. Jan 20 01:44:05.886035 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:44:06.271540 sshd[6329]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:06.274873 systemd[1]: sshd@12-10.200.20.33:22-10.200.16.10:42520.service: Deactivated successfully. Jan 20 01:44:06.277374 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:44:06.279065 systemd-logind[1712]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:44:06.280448 systemd-logind[1712]: Removed session 15. Jan 20 01:44:07.570867 kubelet[3178]: E0120 01:44:07.570810 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:44:10.572790 containerd[1767]: time="2026-01-20T01:44:10.572697236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:10.858467 containerd[1767]: time="2026-01-20T01:44:10.858330183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:10.860685 containerd[1767]: time="2026-01-20T01:44:10.860648542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:10.860807 containerd[1767]: time="2026-01-20T01:44:10.860730102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:10.860877 kubelet[3178]: E0120 01:44:10.860835 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:10.861161 kubelet[3178]: E0120 01:44:10.860886 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:10.861161 kubelet[3178]: E0120 01:44:10.861099 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zh4m7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-mvhh6_calico-apiserver(0e893c04-8b87-47f0-b2a8-981971a5bfc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:10.861758 containerd[1767]: time="2026-01-20T01:44:10.861701782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:44:10.863077 kubelet[3178]: E0120 01:44:10.863038 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:44:11.088786 containerd[1767]: time="2026-01-20T01:44:11.088664944Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:11.092354 containerd[1767]: time="2026-01-20T01:44:11.092209264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:44:11.092354 containerd[1767]: 
time="2026-01-20T01:44:11.092267183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:11.092457 kubelet[3178]: E0120 01:44:11.092391 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:44:11.092457 kubelet[3178]: E0120 01:44:11.092436 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:44:11.092614 kubelet[3178]: E0120 01:44:11.092548 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nmdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bwmf9_calico-system(8a0da74e-5da6-4d17-baef-898cc44d92e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:11.094672 kubelet[3178]: E0120 01:44:11.094623 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:44:11.377128 systemd[1]: Started sshd@13-10.200.20.33:22-10.200.16.10:57548.service - OpenSSH per-connection server daemon (10.200.16.10:57548). Jan 20 01:44:11.868147 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 57548 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:11.869853 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:11.874516 systemd-logind[1712]: New session 16 of user core. Jan 20 01:44:11.879414 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 01:44:12.299106 sshd[6350]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:12.302656 systemd[1]: sshd@13-10.200.20.33:22-10.200.16.10:57548.service: Deactivated successfully. Jan 20 01:44:12.305145 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:44:12.307449 systemd-logind[1712]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:44:12.309275 systemd-logind[1712]: Removed session 16. 
Jan 20 01:44:12.570283 containerd[1767]: time="2026-01-20T01:44:12.570167456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:44:12.839003 containerd[1767]: time="2026-01-20T01:44:12.838310251Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:12.840728 containerd[1767]: time="2026-01-20T01:44:12.840637610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:44:12.840728 containerd[1767]: time="2026-01-20T01:44:12.840715170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:12.840875 kubelet[3178]: E0120 01:44:12.840828 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:12.842168 kubelet[3178]: E0120 01:44:12.840877 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:12.842168 kubelet[3178]: E0120 01:44:12.840995 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n7lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd848bd58-mqt54_calico-system(f24ea947-18f0-4003-bcc7-bb3d7376a6ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:12.842391 kubelet[3178]: E0120 01:44:12.842359 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:44:14.572310 kubelet[3178]: E0120 01:44:14.572268 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:44:14.573042 containerd[1767]: time="2026-01-20T01:44:14.572714040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:44:14.832531 containerd[1767]: time="2026-01-20T01:44:14.832400237Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:14.835660 containerd[1767]: time="2026-01-20T01:44:14.835547476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:44:14.835660 containerd[1767]: time="2026-01-20T01:44:14.835610636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: 
active requests=0, bytes read=69" Jan 20 01:44:14.836120 kubelet[3178]: E0120 01:44:14.835796 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:44:14.836120 kubelet[3178]: E0120 01:44:14.835855 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:44:14.836120 kubelet[3178]: E0120 01:44:14.835976 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:14.841057 containerd[1767]: time="2026-01-20T01:44:14.841029515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:44:15.067492 containerd[1767]: time="2026-01-20T01:44:15.067439197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:15.070240 containerd[1767]: time="2026-01-20T01:44:15.070201437Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:44:15.070327 containerd[1767]: time="2026-01-20T01:44:15.070293837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:44:15.070488 kubelet[3178]: E0120 01:44:15.070448 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:44:15.070547 kubelet[3178]: E0120 01:44:15.070506 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:44:15.071038 kubelet[3178]: E0120 01:44:15.070615 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65nqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n8z22_calico-system(e68b55a2-bd34-4f7b-b8c9-be9ad16a2026): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:15.072002 kubelet[3178]: E0120 01:44:15.071972 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:44:17.384397 systemd[1]: Started sshd@14-10.200.20.33:22-10.200.16.10:57560.service - OpenSSH per-connection server daemon (10.200.16.10:57560). Jan 20 01:44:17.571786 kubelet[3178]: E0120 01:44:17.571743 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:44:17.881230 sshd[6366]: Accepted publickey for core from 10.200.16.10 port 57560 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:17.882630 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:17.886992 systemd-logind[1712]: New session 17 of user core. Jan 20 01:44:17.890045 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:44:18.301136 sshd[6366]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:18.306076 systemd-logind[1712]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:44:18.307222 systemd[1]: sshd@14-10.200.20.33:22-10.200.16.10:57560.service: Deactivated successfully. Jan 20 01:44:18.309954 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:44:18.311956 systemd-logind[1712]: Removed session 17. Jan 20 01:44:18.389267 systemd[1]: Started sshd@15-10.200.20.33:22-10.200.16.10:57568.service - OpenSSH per-connection server daemon (10.200.16.10:57568). 
Jan 20 01:44:18.854970 sshd[6379]: Accepted publickey for core from 10.200.16.10 port 57568 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:18.856361 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:18.859923 systemd-logind[1712]: New session 18 of user core. Jan 20 01:44:18.864079 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 01:44:19.391474 sshd[6379]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:19.396690 systemd[1]: sshd@15-10.200.20.33:22-10.200.16.10:57568.service: Deactivated successfully. Jan 20 01:44:19.400159 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:44:19.402257 systemd-logind[1712]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:44:19.404045 systemd-logind[1712]: Removed session 18. Jan 20 01:44:19.468984 systemd[1]: Started sshd@16-10.200.20.33:22-10.200.16.10:57584.service - OpenSSH per-connection server daemon (10.200.16.10:57584). Jan 20 01:44:19.962415 sshd[6390]: Accepted publickey for core from 10.200.16.10 port 57584 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:19.964450 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:19.968642 systemd-logind[1712]: New session 19 of user core. Jan 20 01:44:19.975048 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:44:20.574417 containerd[1767]: time="2026-01-20T01:44:20.574210270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:20.886860 containerd[1767]: time="2026-01-20T01:44:20.886029810Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:20.889095 containerd[1767]: time="2026-01-20T01:44:20.888911569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:20.889210 containerd[1767]: time="2026-01-20T01:44:20.889015569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:20.889832 kubelet[3178]: E0120 01:44:20.889240 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:20.889832 kubelet[3178]: E0120 01:44:20.889333 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:20.889832 kubelet[3178]: E0120 01:44:20.889469 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlrwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9899c86f9-k8n9d_calico-apiserver(106a00f0-806f-4139-9f3e-5722fa42f199): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:20.890684 kubelet[3178]: E0120 01:44:20.890649 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:44:21.052013 sshd[6390]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:21.058358 systemd-logind[1712]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:44:21.061312 systemd[1]: sshd@16-10.200.20.33:22-10.200.16.10:57584.service: Deactivated successfully. Jan 20 01:44:21.064101 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:44:21.066308 systemd-logind[1712]: Removed session 19. Jan 20 01:44:21.139623 systemd[1]: Started sshd@17-10.200.20.33:22-10.200.16.10:37372.service - OpenSSH per-connection server daemon (10.200.16.10:37372). 
Jan 20 01:44:21.570290 kubelet[3178]: E0120 01:44:21.569984 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:44:21.646154 sshd[6419]: Accepted publickey for core from 10.200.16.10 port 37372 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:21.647667 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:21.652749 systemd-logind[1712]: New session 20 of user core. Jan 20 01:44:21.655063 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:44:22.216059 sshd[6419]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:22.220378 systemd[1]: sshd@17-10.200.20.33:22-10.200.16.10:37372.service: Deactivated successfully. Jan 20 01:44:22.223225 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:44:22.224039 systemd-logind[1712]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:44:22.226284 systemd-logind[1712]: Removed session 20. Jan 20 01:44:22.302614 systemd[1]: Started sshd@18-10.200.20.33:22-10.200.16.10:37388.service - OpenSSH per-connection server daemon (10.200.16.10:37388). Jan 20 01:44:22.799935 sshd[6430]: Accepted publickey for core from 10.200.16.10 port 37388 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:22.802426 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:22.810429 systemd-logind[1712]: New session 21 of user core. Jan 20 01:44:22.816050 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:44:23.218998 sshd[6430]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:23.222794 systemd[1]: sshd@18-10.200.20.33:22-10.200.16.10:37388.service: Deactivated successfully. Jan 20 01:44:23.226288 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:44:23.227956 systemd-logind[1712]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:44:23.230957 systemd-logind[1712]: Removed session 21. 
Jan 20 01:44:23.570661 kubelet[3178]: E0120 01:44:23.570333 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:44:26.573934 containerd[1767]: time="2026-01-20T01:44:26.573371079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:26.575163 kubelet[3178]: E0120 01:44:26.575113 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:44:26.828492 containerd[1767]: time="2026-01-20T01:44:26.828345110Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:26.831198 containerd[1767]: time="2026-01-20T01:44:26.831161269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:26.831276 containerd[1767]: time="2026-01-20T01:44:26.831255309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:26.831413 kubelet[3178]: E0120 01:44:26.831377 3178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:26.831472 kubelet[3178]: E0120 01:44:26.831426 3178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:26.831592 kubelet[3178]: E0120 01:44:26.831548 3178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27gzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d975bd6cf-mbc2x_calico-apiserver(0e8d57de-35ca-4ff1-828c-b0edcfa72a11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:26.833166 kubelet[3178]: E0120 01:44:26.833073 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:44:28.306842 systemd[1]: Started sshd@19-10.200.20.33:22-10.200.16.10:37392.service - OpenSSH per-connection server daemon (10.200.16.10:37392). 
Jan 20 01:44:28.574142 kubelet[3178]: E0120 01:44:28.573399 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:44:28.798011 sshd[6481]: Accepted publickey for core from 10.200.16.10 port 37392 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:28.799334 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:28.803075 systemd-logind[1712]: New session 22 of user core. Jan 20 01:44:28.813030 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:44:29.254854 sshd[6481]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:29.259623 systemd[1]: sshd@19-10.200.20.33:22-10.200.16.10:37392.service: Deactivated successfully. Jan 20 01:44:29.262474 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:44:29.266061 systemd-logind[1712]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:44:29.266892 systemd-logind[1712]: Removed session 22. 
Jan 20 01:44:31.573200 kubelet[3178]: E0120 01:44:31.573154 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:44:33.569953 kubelet[3178]: E0120 01:44:33.569857 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:44:34.339878 systemd[1]: Started sshd@20-10.200.20.33:22-10.200.16.10:42892.service - OpenSSH per-connection server daemon (10.200.16.10:42892). Jan 20 01:44:34.791684 sshd[6493]: Accepted publickey for core from 10.200.16.10 port 42892 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:34.793197 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:34.800097 systemd-logind[1712]: New session 23 of user core. Jan 20 01:44:34.804444 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 01:44:35.184128 sshd[6493]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:35.187492 systemd[1]: sshd@20-10.200.20.33:22-10.200.16.10:42892.service: Deactivated successfully. Jan 20 01:44:35.189123 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:44:35.189695 systemd-logind[1712]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:44:35.190494 systemd-logind[1712]: Removed session 23. 
Jan 20 01:44:35.571383 kubelet[3178]: E0120 01:44:35.571338 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:44:38.570127 kubelet[3178]: E0120 01:44:38.569824 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:44:40.289527 systemd[1]: Started sshd@21-10.200.20.33:22-10.200.16.10:44158.service - OpenSSH per-connection server daemon (10.200.16.10:44158). Jan 20 01:44:40.573619 kubelet[3178]: E0120 01:44:40.572436 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:44:40.574842 kubelet[3178]: E0120 01:44:40.574789 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:44:40.576448 kubelet[3178]: E0120 01:44:40.576419 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:44:40.782706 sshd[6507]: Accepted publickey for core from 10.200.16.10 port 44158 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:40.784203 sshd[6507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:40.788167 systemd-logind[1712]: New session 24 of user core. Jan 20 01:44:40.794317 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 01:44:41.219545 sshd[6507]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:41.223525 systemd-logind[1712]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:44:41.223752 systemd[1]: sshd@21-10.200.20.33:22-10.200.16.10:44158.service: Deactivated successfully. Jan 20 01:44:41.227669 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:44:41.228736 systemd-logind[1712]: Removed session 24. Jan 20 01:44:43.570281 kubelet[3178]: E0120 01:44:43.570171 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7" Jan 20 01:44:46.302749 systemd[1]: Started sshd@22-10.200.20.33:22-10.200.16.10:44162.service - OpenSSH per-connection server daemon (10.200.16.10:44162). Jan 20 01:44:46.760643 sshd[6520]: Accepted publickey for core from 10.200.16.10 port 44162 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:46.762460 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:46.767964 systemd-logind[1712]: New session 25 of user core. Jan 20 01:44:46.773320 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:44:47.166422 sshd[6520]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:47.171294 systemd[1]: sshd@22-10.200.20.33:22-10.200.16.10:44162.service: Deactivated successfully. Jan 20 01:44:47.173970 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:44:47.176507 systemd-logind[1712]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:44:47.177813 systemd-logind[1712]: Removed session 25. 
Jan 20 01:44:47.569755 kubelet[3178]: E0120 01:44:47.569707 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-k8n9d" podUID="106a00f0-806f-4139-9f3e-5722fa42f199" Jan 20 01:44:49.570462 kubelet[3178]: E0120 01:44:49.570137 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bwmf9" podUID="8a0da74e-5da6-4d17-baef-898cc44d92e7" Jan 20 01:44:52.256204 systemd[1]: Started sshd@23-10.200.20.33:22-10.200.16.10:47160.service - OpenSSH per-connection server daemon (10.200.16.10:47160). Jan 20 01:44:52.571938 kubelet[3178]: E0120 01:44:52.571545 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n8z22" podUID="e68b55a2-bd34-4f7b-b8c9-be9ad16a2026" Jan 20 01:44:52.754780 sshd[6534]: Accepted publickey for core from 10.200.16.10 port 47160 ssh2: RSA SHA256:9VnFHVIcaU86IF+Fu1J0TZ+/QMOFdNW4+iVqovVN6CM Jan 20 01:44:52.756508 sshd[6534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:52.763568 systemd-logind[1712]: New session 26 of user core. Jan 20 01:44:52.768056 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 01:44:53.188229 sshd[6534]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:53.193344 systemd[1]: sshd@23-10.200.20.33:22-10.200.16.10:47160.service: Deactivated successfully. Jan 20 01:44:53.197514 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:44:53.198450 systemd-logind[1712]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:44:53.199800 systemd-logind[1712]: Removed session 26. 
Jan 20 01:44:53.570350 kubelet[3178]: E0120 01:44:53.570312 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9899c86f9-mvhh6" podUID="0e893c04-8b87-47f0-b2a8-981971a5bfc3" Jan 20 01:44:55.036775 systemd[1]: run-containerd-runc-k8s.io-da104cf6abc9d1b34e7bb216db1c65190367537b3501f2e1830ace0a0e2de13a-runc.AGXD6D.mount: Deactivated successfully. Jan 20 01:44:55.571792 kubelet[3178]: E0120 01:44:55.570949 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd848bd58-mqt54" podUID="f24ea947-18f0-4003-bcc7-bb3d7376a6ba" Jan 20 01:44:55.571792 kubelet[3178]: E0120 01:44:55.571725 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d975bd6cf-mbc2x" podUID="0e8d57de-35ca-4ff1-828c-b0edcfa72a11" Jan 20 01:44:56.571683 kubelet[3178]: E0120 01:44:56.571635 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bc8ccf7c-bn7qp" podUID="64bfc6ad-94c7-4fd0-8f5e-dd6a18f5f9f7"