Jan 17 00:00:04.208988 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 00:00:04.209010 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:00:04.209018 kernel: KASLR enabled
Jan 17 00:00:04.209024 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 17 00:00:04.209032 kernel: printk: bootconsole [pl11] enabled
Jan 17 00:00:04.209037 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:00:04.209045 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 17 00:00:04.209051 kernel: random: crng init done
Jan 17 00:00:04.209057 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:00:04.209063 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 17 00:00:04.209069 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209075 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209083 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 17 00:00:04.209089 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209096 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209103 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209109 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209117 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209124 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209130 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 17 00:00:04.209137 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:00:04.209143 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 17 00:00:04.209150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 17 00:00:04.209156 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 17 00:00:04.209163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 17 00:00:04.209169 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 17 00:00:04.209176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 17 00:00:04.209182 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 17 00:00:04.209190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 17 00:00:04.209196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 17 00:00:04.209203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 17 00:00:04.209209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 17 00:00:04.209215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 17 00:00:04.209222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 17 00:00:04.209228 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff]
Jan 17 00:00:04.209235 kernel: Zone ranges:
Jan 17 00:00:04.209241 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 17 00:00:04.209247 kernel: DMA32 empty
Jan 17 00:00:04.209254 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 17 00:00:04.209260 kernel: Movable zone start for each node
Jan 17 00:00:04.209271 kernel: Early memory node ranges
Jan 17 00:00:04.209278 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 17 00:00:04.209285 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 17 00:00:04.209292 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 17 00:00:04.209299 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 17 00:00:04.209307 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 17 00:00:04.209314 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 17 00:00:04.209320 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 17 00:00:04.209328 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 17 00:00:04.209334 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 17 00:00:04.209341 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:00:04.209348 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 00:00:04.209355 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:00:04.209362 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 17 00:00:04.209368 kernel: psci: SMC Calling Convention v1.4
Jan 17 00:00:04.209375 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 17 00:00:04.209382 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 17 00:00:04.209390 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:00:04.209397 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:00:04.209404 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:00:04.209411 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:00:04.211458 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:00:04.211474 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 00:00:04.211482 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:00:04.211489 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 00:00:04.211528 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 00:00:04.211538 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 00:00:04.211546 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 17 00:00:04.211559 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 00:00:04.211566 kernel: alternatives: applying boot alternatives
Jan 17 00:00:04.211605 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:00:04.211613 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:00:04.211620 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:00:04.211627 kernel: Fallback order for Node 0: 0
Jan 17 00:00:04.211634 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 17 00:00:04.211677 kernel: Policy zone: Normal
Jan 17 00:00:04.211685 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:00:04.211692 kernel: software IO TLB: area num 2.
Jan 17 00:00:04.211699 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 17 00:00:04.211709 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved)
Jan 17 00:00:04.211716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:00:04.211723 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:00:04.211731 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:00:04.211738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:00:04.211745 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:00:04.211752 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:00:04.211759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:00:04.211766 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:00:04.211773 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:00:04.211780 kernel: GICv3: 960 SPIs implemented
Jan 17 00:00:04.211788 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:00:04.211795 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:00:04.211802 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 17 00:00:04.211809 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 17 00:00:04.211816 kernel: ITS: No ITS available, not enabling LPIs
Jan 17 00:00:04.211823 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:00:04.211830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:00:04.211837 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 00:00:04.211844 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 00:00:04.211852 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 00:00:04.211859 kernel: Console: colour dummy device 80x25
Jan 17 00:00:04.211867 kernel: printk: console [tty1] enabled
Jan 17 00:00:04.211875 kernel: ACPI: Core revision 20230628
Jan 17 00:00:04.211882 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 00:00:04.211889 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:00:04.211896 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:00:04.211904 kernel: landlock: Up and running.
Jan 17 00:00:04.211910 kernel: SELinux: Initializing.
Jan 17 00:00:04.211918 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:00:04.211925 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:00:04.211934 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:00:04.211941 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:00:04.211948 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1
Jan 17 00:00:04.211955 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 17 00:00:04.211962 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 17 00:00:04.211969 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:00:04.211976 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:00:04.211984 kernel: Remapping and enabling EFI services.
Jan 17 00:00:04.211997 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:00:04.212005 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:00:04.212013 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 17 00:00:04.212020 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 00:00:04.212031 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 00:00:04.212038 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:00:04.212046 kernel: SMP: Total of 2 processors activated.
Jan 17 00:00:04.212053 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:00:04.212061 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 17 00:00:04.212070 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 00:00:04.212078 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:00:04.212085 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 00:00:04.212092 kernel: CPU features: detected: LSE atomic instructions
Jan 17 00:00:04.212100 kernel: CPU features: detected: Privileged Access Never
Jan 17 00:00:04.212107 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:00:04.212114 kernel: alternatives: applying system-wide alternatives
Jan 17 00:00:04.212122 kernel: devtmpfs: initialized
Jan 17 00:00:04.212129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:00:04.212138 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:00:04.212146 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:00:04.212153 kernel: SMBIOS 3.1.0 present.
Jan 17 00:00:04.212161 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 17 00:00:04.212168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:00:04.212176 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:00:04.212183 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:00:04.212191 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:00:04.212198 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:00:04.212207 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 17 00:00:04.212215 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:00:04.212222 kernel: cpuidle: using governor menu
Jan 17 00:00:04.212230 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:00:04.212237 kernel: ASID allocator initialised with 32768 entries
Jan 17 00:00:04.212245 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:00:04.212252 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:00:04.212260 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 00:00:04.212267 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 00:00:04.212276 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:00:04.212284 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:00:04.212291 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:00:04.212299 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:00:04.212306 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:00:04.212314 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:00:04.212321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:00:04.212329 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:00:04.212336 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:00:04.212345 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:00:04.212363 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:00:04.212370 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:00:04.212378 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:00:04.212385 kernel: ACPI: Interpreter enabled
Jan 17 00:00:04.212393 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:00:04.212400 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 00:00:04.212408 kernel: printk: console [ttyAMA0] enabled
Jan 17 00:00:04.212415 kernel: printk: bootconsole [pl11] disabled
Jan 17 00:00:04.212442 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 17 00:00:04.212450 kernel: iommu: Default domain type: Translated
Jan 17 00:00:04.212457 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:00:04.212464 kernel: efivars: Registered efivars operations
Jan 17 00:00:04.212472 kernel: vgaarb: loaded
Jan 17 00:00:04.212479 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:00:04.212487 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:00:04.212494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:00:04.212502 kernel: pnp: PnP ACPI init
Jan 17 00:00:04.212511 kernel: pnp: PnP ACPI: found 0 devices
Jan 17 00:00:04.212518 kernel: NET: Registered PF_INET protocol family
Jan 17 00:00:04.212526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:00:04.212533 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:00:04.212541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:00:04.212548 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:00:04.212556 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:00:04.212563 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:00:04.212571 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:00:04.212580 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:00:04.212588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:00:04.212595 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:00:04.212602 kernel: kvm [1]: HYP mode not available
Jan 17 00:00:04.212610 kernel: Initialise system trusted keyrings
Jan 17 00:00:04.212617 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:00:04.212625 kernel: Key type asymmetric registered
Jan 17 00:00:04.212632 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:00:04.212639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 00:00:04.212649 kernel: io scheduler mq-deadline registered
Jan 17 00:00:04.212656 kernel: io scheduler kyber registered
Jan 17 00:00:04.212663 kernel: io scheduler bfq registered
Jan 17 00:00:04.212671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:00:04.212678 kernel: thunder_xcv, ver 1.0
Jan 17 00:00:04.212685 kernel: thunder_bgx, ver 1.0
Jan 17 00:00:04.212693 kernel: nicpf, ver 1.0
Jan 17 00:00:04.212700 kernel: nicvf, ver 1.0
Jan 17 00:00:04.212856 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 00:00:04.212938 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:00:03 UTC (1768608003)
Jan 17 00:00:04.212949 kernel: efifb: probing for efifb
Jan 17 00:00:04.212956 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 17 00:00:04.212964 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 17 00:00:04.212971 kernel: efifb: scrolling: redraw
Jan 17 00:00:04.212979 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:00:04.212986 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 00:00:04.212994 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:00:04.213003 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 17 00:00:04.213011 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:00:04.213019 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available
Jan 17 00:00:04.213026 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 00:00:04.213034 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 00:00:04.213041 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:00:04.213049 kernel: Segment Routing with IPv6
Jan 17 00:00:04.213056 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:00:04.213063 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:00:04.213072 kernel: Key type dns_resolver registered
Jan 17 00:00:04.213080 kernel: registered taskstats version 1
Jan 17 00:00:04.213087 kernel: Loading compiled-in X.509 certificates
Jan 17 00:00:04.213095 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 17 00:00:04.213103 kernel: Key type .fscrypt registered
Jan 17 00:00:04.213110 kernel: Key type fscrypt-provisioning registered
Jan 17 00:00:04.213117 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:00:04.213125 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:00:04.213132 kernel: ima: No architecture policies found
Jan 17 00:00:04.213141 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 00:00:04.213148 kernel: clk: Disabling unused clocks
Jan 17 00:00:04.213156 kernel: Freeing unused kernel memory: 39424K
Jan 17 00:00:04.213163 kernel: Run /init as init process
Jan 17 00:00:04.213171 kernel: with arguments:
Jan 17 00:00:04.213178 kernel: /init
Jan 17 00:00:04.213185 kernel: with environment:
Jan 17 00:00:04.213192 kernel: HOME=/
Jan 17 00:00:04.213200 kernel: TERM=linux
Jan 17 00:00:04.213209 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:00:04.213221 systemd[1]: Detected virtualization microsoft.
Jan 17 00:00:04.213229 systemd[1]: Detected architecture arm64.
Jan 17 00:00:04.213237 systemd[1]: Running in initrd.
Jan 17 00:00:04.213244 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:00:04.213252 systemd[1]: Hostname set to .
Jan 17 00:00:04.213260 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:00:04.213270 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:00:04.213278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:00:04.213286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:00:04.213295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:00:04.213304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:00:04.213312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:00:04.213320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:00:04.213330 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:00:04.213340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:00:04.213348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:00:04.213356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:00:04.213364 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:00:04.213372 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:00:04.213380 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:00:04.213388 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:00:04.213397 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:00:04.213407 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:00:04.213415 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:00:04.215459 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:00:04.215470 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:00:04.215478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:00:04.215487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:00:04.215495 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:00:04.215504 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:00:04.215517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:00:04.215525 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:00:04.215533 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:00:04.215541 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:00:04.215574 systemd-journald[218]: Collecting audit messages is disabled.
Jan 17 00:00:04.215597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:00:04.215605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:04.215615 systemd-journald[218]: Journal started
Jan 17 00:00:04.215634 systemd-journald[218]: Runtime Journal (/run/log/journal/b186a0fc903f475cb88a91c7286d357e) is 8.0M, max 78.5M, 70.5M free.
Jan 17 00:00:04.223584 systemd-modules-load[219]: Inserted module 'overlay'
Jan 17 00:00:04.239446 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:00:04.249198 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:00:04.264467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:00:04.264490 kernel: Bridge firewalling registered
Jan 17 00:00:04.257831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:00:04.261449 systemd-modules-load[219]: Inserted module 'br_netfilter'
Jan 17 00:00:04.270295 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:00:04.285435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:00:04.292093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:04.309863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:00:04.316590 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:00:04.329259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:00:04.350582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:00:04.358537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:04.375931 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:00:04.381029 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:00:04.400616 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:00:04.410566 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:00:04.420919 dracut-cmdline[251]: dracut-dracut-053
Jan 17 00:00:04.420919 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:00:04.425669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:00:04.434762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:00:04.481004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:00:04.507786 systemd-resolved[281]: Positive Trust Anchors:
Jan 17 00:00:04.507803 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:00:04.507834 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:00:04.510519 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jan 17 00:00:04.511313 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:00:04.517308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:00:04.593440 kernel: SCSI subsystem initialized
Jan 17 00:00:04.600426 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:00:04.610437 kernel: iscsi: registered transport (tcp)
Jan 17 00:00:04.627078 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:00:04.627110 kernel: QLogic iSCSI HBA Driver
Jan 17 00:00:04.665128 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:00:04.677530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:00:04.718654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:00:04.718702 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:00:04.723908 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:00:04.771442 kernel: raid6: neonx8 gen() 15793 MB/s
Jan 17 00:00:04.790435 kernel: raid6: neonx4 gen() 15698 MB/s
Jan 17 00:00:04.810425 kernel: raid6: neonx2 gen() 13258 MB/s
Jan 17 00:00:04.829424 kernel: raid6: neonx1 gen() 10489 MB/s
Jan 17 00:00:04.848424 kernel: raid6: int64x8 gen() 6978 MB/s
Jan 17 00:00:04.869425 kernel: raid6: int64x4 gen() 7357 MB/s
Jan 17 00:00:04.888424 kernel: raid6: int64x2 gen() 6147 MB/s
Jan 17 00:00:04.910452 kernel: raid6: int64x1 gen() 5071 MB/s
Jan 17 00:00:04.910473 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
Jan 17 00:00:04.934548 kernel: raid6: .... xor() 12042 MB/s, rmw enabled
Jan 17 00:00:04.934560 kernel: raid6: using neon recovery algorithm
Jan 17 00:00:04.944251 kernel: xor: measuring software checksum speed
Jan 17 00:00:04.944265 kernel: 8regs : 19750 MB/sec
Jan 17 00:00:04.947142 kernel: 32regs : 19660 MB/sec
Jan 17 00:00:04.953161 kernel: arm64_neon : 26238 MB/sec
Jan 17 00:00:04.953173 kernel: xor: using function: arm64_neon (26238 MB/sec)
Jan 17 00:00:05.002431 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:00:05.012850 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:00:05.026548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:00:05.046542 systemd-udevd[438]: Using default interface naming scheme 'v255'.
Jan 17 00:00:05.050972 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:00:05.067587 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:00:05.080085 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation
Jan 17 00:00:05.106413 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:00:05.119532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:00:05.159632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:00:05.174585 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:00:05.191330 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:00:05.205264 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:00:05.216302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:00:05.226501 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:00:05.240592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:00:05.259254 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:00:05.277434 kernel: hv_vmbus: Vmbus version:5.3
Jan 17 00:00:05.277588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:00:05.281922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:05.293970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:00:05.308851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:00:05.359949 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 00:00:05.359975 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 00:00:05.359985 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 00:00:05.359995 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 00:00:05.360004 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 00:00:05.360013 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 17 00:00:05.360041 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 17 00:00:05.360061 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 17 00:00:05.309041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:05.382349 kernel: scsi host0: storvsc_host_t
Jan 17 00:00:05.382585 kernel: scsi host1: storvsc_host_t
Jan 17 00:00:05.382705 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 00:00:05.382817 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 00:00:05.322203 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:05.397697 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 17 00:00:05.385786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:05.404894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:00:05.404993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:05.427020 kernel: PTP clock support registered
Jan 17 00:00:05.429619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:00:05.164083 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 00:00:05.177684 kernel: hv_vmbus: registering driver hv_utils
Jan 17 00:00:05.177701 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 00:00:05.177709 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 00:00:05.177720 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 00:00:05.177728 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: VF slot 1 added
Jan 17 00:00:05.177850 systemd-journald[218]: Time jumped backwards, rotating.
Jan 17 00:00:05.177887 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 00:00:05.162294 systemd-resolved[281]: Clock change detected. Flushing caches.
Jan 17 00:00:05.201600 kernel: hv_vmbus: registering driver hv_pci
Jan 17 00:00:05.201619 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 00:00:05.201784 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:00:05.177311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:00:05.246880 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 00:00:05.247072 kernel: hv_pci 1e7ef999-e6fd-45c3-8a89-29fc06e9ff9d: PCI VMBus probing: Using version 0x10004
Jan 17 00:00:05.247196 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:00:05.247301 kernel: hv_pci 1e7ef999-e6fd-45c3-8a89-29fc06e9ff9d: PCI host bridge to bus e6fd:00
Jan 17 00:00:05.247386 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 00:00:05.247480 kernel: pci_bus e6fd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 17 00:00:05.247579 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 00:00:05.247672 kernel: pci_bus e6fd:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 00:00:05.247755 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:05.227682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:00:05.268055 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 00:00:05.268237 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 00:00:05.268345 kernel: pci e6fd:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 17 00:00:05.283197 kernel: pci e6fd:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 17 00:00:05.296554 kernel: pci e6fd:00:02.0: enabling Extended Tags
Jan 17 00:00:05.296618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#288 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:00:05.313335 kernel: pci e6fd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e6fd:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 17 00:00:05.317412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:00:05.334131 kernel: pci_bus e6fd:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 00:00:05.334306 kernel: pci e6fd:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 17 00:00:05.346191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:00:05.380580 kernel: mlx5_core e6fd:00:02.0: enabling device (0000 -> 0002)
Jan 17 00:00:05.388857 kernel: mlx5_core e6fd:00:02.0: firmware version: 16.30.5026
Jan 17 00:00:05.581673 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: VF registering: eth1
Jan 17 00:00:05.581870 kernel: mlx5_core e6fd:00:02.0 eth1: joined to eth0
Jan 17 00:00:05.589292 kernel: mlx5_core e6fd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 17 00:00:05.600185 kernel: mlx5_core e6fd:00:02.0 enP59133s1: renamed from eth1
Jan 17 00:00:05.752544 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 00:00:05.804798 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 00:00:05.824180 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (483)
Jan 17 00:00:05.841300 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (503)
Jan 17 00:00:05.844757 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 00:00:05.853931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 00:00:05.879268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:00:05.896457 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:00:05.920216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:05.928179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:05.936197 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:06.939683 disk-uuid[610]: The operation has completed successfully.
Jan 17 00:00:06.943732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:00:07.012152 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:00:07.012275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:00:07.032412 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:00:07.037418 sh[723]: Success
Jan 17 00:00:07.074273 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 00:00:07.363105 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:00:07.370107 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:00:07.380311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:00:07.407019 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 17 00:00:07.407059 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:07.412727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:00:07.416952 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:00:07.420662 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:00:07.702781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:00:07.707134 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:00:07.724297 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:00:07.735332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:00:07.761438 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:07.761484 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:07.764955 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:00:07.806435 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:00:07.813871 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:00:07.825244 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:07.832931 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:00:07.848399 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:00:07.856665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:00:07.885048 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:00:07.905809 systemd-networkd[911]: lo: Link UP
Jan 17 00:00:07.905821 systemd-networkd[911]: lo: Gained carrier
Jan 17 00:00:07.907433 systemd-networkd[911]: Enumeration completed
Jan 17 00:00:07.907955 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:07.907958 systemd-networkd[911]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:00:07.909328 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:00:07.920367 systemd[1]: Reached target network.target - Network.
Jan 17 00:00:07.992196 kernel: mlx5_core e6fd:00:02.0 enP59133s1: Link up
Jan 17 00:00:08.026783 systemd-networkd[911]: enP59133s1: Link UP
Jan 17 00:00:08.030297 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: Data path switched to VF: enP59133s1
Jan 17 00:00:08.026881 systemd-networkd[911]: eth0: Link UP
Jan 17 00:00:08.027705 systemd-networkd[911]: eth0: Gained carrier
Jan 17 00:00:08.027715 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:00:08.034392 systemd-networkd[911]: enP59133s1: Gained carrier
Jan 17 00:00:08.055221 systemd-networkd[911]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 17 00:00:08.924371 ignition[903]: Ignition 2.19.0
Jan 17 00:00:08.924381 ignition[903]: Stage: fetch-offline
Jan 17 00:00:08.927407 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:00:08.924416 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:08.937431 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:00:08.924424 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:00:08.924512 ignition[903]: parsed url from cmdline: ""
Jan 17 00:00:08.924515 ignition[903]: no config URL provided
Jan 17 00:00:08.924519 ignition[903]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:00:08.924526 ignition[903]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:00:08.924530 ignition[903]: failed to fetch config: resource requires networking
Jan 17 00:00:08.924683 ignition[903]: Ignition finished successfully
Jan 17 00:00:08.960055 ignition[923]: Ignition 2.19.0
Jan 17 00:00:08.960062 ignition[923]: Stage: fetch
Jan 17 00:00:08.962914 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:08.962931 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:00:08.963031 ignition[923]: parsed url from cmdline: ""
Jan 17 00:00:08.963033 ignition[923]: no config URL provided
Jan 17 00:00:08.963038 ignition[923]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:00:08.963045 ignition[923]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:00:08.963066 ignition[923]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 00:00:09.048255 ignition[923]: GET result: OK
Jan 17 00:00:09.048333 ignition[923]: config has been read from IMDS userdata
Jan 17 00:00:09.048383 ignition[923]: parsing config with SHA512: 67475a5a0c55cf8a9b4cd3c94f3e94950e62426bab8af3a08a5f040b62456eeb1ec22864bb7580c4ee48bcff473f47646bc01cb825569db43a250d9cd349ea9b
Jan 17 00:00:09.052616 unknown[923]: fetched base config from "system"
Jan 17 00:00:09.053137 ignition[923]: fetch: fetch complete
Jan 17 00:00:09.052623 unknown[923]: fetched base config from "system"
Jan 17 00:00:09.053142 ignition[923]: fetch: fetch passed
Jan 17 00:00:09.052628 unknown[923]: fetched user config from "azure"
Jan 17 00:00:09.053200 ignition[923]: Ignition finished successfully
Jan 17 00:00:09.056210 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:00:09.080346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:00:09.100911 ignition[929]: Ignition 2.19.0
Jan 17 00:00:09.100928 ignition[929]: Stage: kargs
Jan 17 00:00:09.107759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:00:09.101163 ignition[929]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:09.101206 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:00:09.102456 ignition[929]: kargs: kargs passed
Jan 17 00:00:09.102516 ignition[929]: Ignition finished successfully
Jan 17 00:00:09.128393 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:00:09.146313 ignition[935]: Ignition 2.19.0
Jan 17 00:00:09.146322 ignition[935]: Stage: disks
Jan 17 00:00:09.149855 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:00:09.146504 ignition[935]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:09.156404 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:00:09.146515 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:00:09.165382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:00:09.147432 ignition[935]: disks: disks passed
Jan 17 00:00:09.174573 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:00:09.147486 ignition[935]: Ignition finished successfully
Jan 17 00:00:09.183665 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:00:09.192780 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:00:09.218420 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:00:09.294183 systemd-fsck[943]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:00:09.301780 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:00:09.317357 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:00:09.374190 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:00:09.374719 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:00:09.378925 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:00:09.418236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:00:09.443229 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Jan 17 00:00:09.453884 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:09.453919 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:09.454294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:00:09.466005 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:00:09.466348 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:00:09.471596 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:00:09.504094 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:00:09.471629 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:00:09.478055 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:00:09.496355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:00:09.518692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:00:10.057348 coreos-metadata[969]: Jan 17 00:00:10.057 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:00:10.064649 coreos-metadata[969]: Jan 17 00:00:10.064 INFO Fetch successful
Jan 17 00:00:10.064649 coreos-metadata[969]: Jan 17 00:00:10.064 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:00:10.079155 coreos-metadata[969]: Jan 17 00:00:10.079 INFO Fetch successful
Jan 17 00:00:10.079155 coreos-metadata[969]: Jan 17 00:00:10.079 INFO wrote hostname ci-4081.3.6-n-e1db9b2d97 to /sysroot/etc/hostname
Jan 17 00:00:10.081579 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:00:10.089344 systemd-networkd[911]: eth0: Gained IPv6LL
Jan 17 00:00:10.289889 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:00:10.328474 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:00:10.349898 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:00:10.355535 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:00:11.112363 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:00:11.124376 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:00:11.130746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:00:11.151628 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:11.149111 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:00:11.174857 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:00:11.187777 ignition[1073]: INFO : Ignition 2.19.0
Jan 17 00:00:11.187777 ignition[1073]: INFO : Stage: mount
Jan 17 00:00:11.195095 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:11.195095 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:00:11.195095 ignition[1073]: INFO : mount: mount passed
Jan 17 00:00:11.195095 ignition[1073]: INFO : Ignition finished successfully
Jan 17 00:00:11.192729 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:00:11.212366 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:00:11.229133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:00:11.250197 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1083)
Jan 17 00:00:11.250254 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:00:11.261157 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:00:11.264884 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:00:11.272206 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:00:11.273489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:00:11.299207 ignition[1099]: INFO : Ignition 2.19.0
Jan 17 00:00:11.299207 ignition[1099]: INFO : Stage: files
Jan 17 00:00:11.299207 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:00:11.299207 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:00:11.315369 ignition[1099]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:00:11.315369 ignition[1099]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:00:11.315369 ignition[1099]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:00:11.349129 ignition[1099]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:00:11.355184 ignition[1099]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:00:11.355184 ignition[1099]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:00:11.349515 unknown[1099]: wrote ssh authorized keys file for user: core
Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 17 00:00:11.519075 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:00:11.853194 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 17 00:00:11.853194 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 17 00:00:12.417419 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:00:12.678418 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:00:12.688154 ignition[1099]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 17 00:00:12.704746 ignition[1099]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:00:12.714950 ignition[1099]: INFO : files: files passed
Jan 17 00:00:12.714950 ignition[1099]: INFO : Ignition finished successfully
Jan 17 00:00:12.707395 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:00:12.737451 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:00:12.749364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:00:12.764636 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:00:12.861482 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:00:12.861482 initrd-setup-root-after-ignition[1127]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:00:12.764729 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:00:12.880387 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:00:12.805260 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:00:12.818501 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:00:12.836412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:00:12.895590 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:00:12.895690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:00:12.903612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:00:12.914156 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:00:12.923177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:00:12.936303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:00:12.960405 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:00:12.973696 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:00:12.990256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:00:12.995780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:00:13.006141 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:00:13.015512 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:00:13.015636 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:00:13.029125 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:00:13.034118 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:00:13.043410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:00:13.052737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:00:13.061741 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:00:13.071352 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:00:13.080682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:00:13.091098 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:00:13.100150 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:00:13.110286 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:00:13.118576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:00:13.118698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:00:13.130733 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:00:13.135815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:00:13.145322 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 00:00:13.149522 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:00:13.155292 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:00:13.155405 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:00:13.169670 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:00:13.169789 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:00:13.175491 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:00:13.175583 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:00:13.184059 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:00:13.184154 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:00:13.250852 ignition[1152]: INFO : Ignition 2.19.0 Jan 17 00:00:13.250852 ignition[1152]: INFO : Stage: umount Jan 17 00:00:13.250852 ignition[1152]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:13.250852 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:13.210480 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:00:13.289127 ignition[1152]: INFO : umount: umount passed Jan 17 00:00:13.289127 ignition[1152]: INFO : Ignition finished successfully Jan 17 00:00:13.224353 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:00:13.224532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:00:13.258045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:00:13.266607 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:00:13.266767 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:00:13.278679 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:00:13.278784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:00:13.297575 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:00:13.298383 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:00:13.298492 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:00:13.316347 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:00:13.316439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:00:13.323237 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:00:13.323327 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:00:13.328944 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:00:13.328995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:00:13.339712 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:00:13.339760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:00:13.348074 systemd[1]: Stopped target network.target - Network. Jan 17 00:00:13.358060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:00:13.358121 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:00:13.368941 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:00:13.377508 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 00:00:13.382095 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:00:13.387874 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:00:13.395711 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:00:13.406246 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:00:13.406292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:00:13.419016 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:00:13.419061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:00:13.429321 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:00:13.429371 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:00:13.438136 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:00:13.438184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:00:13.447152 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:00:13.460779 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:00:13.475212 systemd-networkd[911]: eth0: DHCPv6 lease lost Jan 17 00:00:13.479768 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:00:13.479892 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:00:13.488809 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:00:13.488923 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:00:13.672288 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: Data path switched from VF: enP59133s1 Jan 17 00:00:13.503947 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:00:13.504003 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:00:13.529381 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:00:13.538082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:00:13.538141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:00:13.547704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:00:13.547750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:00:13.557026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:00:13.557069 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:00:13.566025 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:00:13.566066 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:00:13.575612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:00:13.600787 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:00:13.602213 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:00:13.610377 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:00:13.610430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:00:13.620873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:00:13.620914 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:00:13.629533 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 17 00:00:13.629588 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:00:13.643308 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:00:13.643361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:00:13.659504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:00:13.659563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:13.673369 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:00:13.686805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:00:13.686865 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:00:13.697496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:13.697541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:13.707450 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:00:13.707555 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:00:13.716376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:00:13.716455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:00:13.727573 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:00:13.727663 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:00:13.737339 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:00:13.737446 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:00:13.746404 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:00:13.770402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:00:13.980476 systemd[1]: Switching root. 
Jan 17 00:00:14.009699 systemd-journald[218]: Journal stopped
Total pages: 1032156 Jan 17 00:00:04.211677 kernel: Policy zone: Normal Jan 17 00:00:04.211685 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:00:04.211692 kernel: software IO TLB: area num 2. Jan 17 00:00:04.211699 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:00:04.211709 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 17 00:00:04.211716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:00:04.211723 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:00:04.211731 kernel: rcu: RCU event tracing is enabled. Jan 17 00:00:04.211738 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:00:04.211745 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:00:04.211752 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:00:04.211759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:00:04.211766 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:00:04.211773 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:00:04.211780 kernel: GICv3: 960 SPIs implemented Jan 17 00:00:04.211788 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:00:04.211795 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:00:04.211802 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:00:04.211809 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:00:04.211816 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:00:04.211823 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:00:04.211830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:00:04.211837 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:00:04.211844 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:00:04.211852 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:00:04.211859 kernel: Console: colour dummy device 80x25 Jan 17 00:00:04.211867 kernel: printk: console [tty1] enabled Jan 17 00:00:04.211875 kernel: ACPI: Core revision 20230628 Jan 17 00:00:04.211882 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:00:04.211889 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:00:04.211896 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:00:04.211904 kernel: landlock: Up and running. Jan 17 00:00:04.211910 kernel: SELinux: Initializing. Jan 17 00:00:04.211918 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:00:04.211925 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:00:04.211934 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:00:04.211941 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:00:04.211948 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:00:04.211955 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:00:04.211962 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:00:04.211969 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 00:00:04.211976 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:00:04.211984 kernel: Remapping and enabling EFI services. Jan 17 00:00:04.211997 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:00:04.212005 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:00:04.212013 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:00:04.212020 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:00:04.212031 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:00:04.212038 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:00:04.212046 kernel: SMP: Total of 2 processors activated. Jan 17 00:00:04.212053 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:00:04.212061 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:00:04.212070 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:00:04.212078 kernel: CPU features: detected: CRC32 instructions Jan 17 00:00:04.212085 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:00:04.212092 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:00:04.212100 kernel: CPU features: detected: Privileged Access Never Jan 17 00:00:04.212107 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:00:04.212114 kernel: alternatives: applying system-wide alternatives Jan 17 00:00:04.212122 kernel: devtmpfs: initialized Jan 17 00:00:04.212129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:00:04.212138 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:00:04.212146 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:00:04.212153 kernel: SMBIOS 3.1.0 present. Jan 17 00:00:04.212161 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:00:04.212168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:00:04.212176 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:00:04.212183 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:00:04.212191 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:00:04.212198 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:00:04.212207 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:00:04.212215 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:00:04.212222 kernel: cpuidle: using governor menu Jan 17 00:00:04.212230 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:00:04.212237 kernel: ASID allocator initialised with 32768 entries Jan 17 00:00:04.212245 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:00:04.212252 kernel: Serial: AMBA PL011 UART driver Jan 17 00:00:04.212260 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:00:04.212267 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:00:04.212276 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:00:04.212284 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:00:04.212291 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:00:04.212299 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:00:04.212306 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:00:04.212314 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:00:04.212321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:00:04.212329 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:00:04.212336 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:00:04.212345 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:00:04.212363 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:00:04.212370 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:00:04.212378 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:00:04.212385 kernel: ACPI: Interpreter enabled Jan 17 00:00:04.212393 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:00:04.212400 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:00:04.212408 kernel: printk: console [ttyAMA0] enabled Jan 17 00:00:04.212415 kernel: printk: bootconsole [pl11] disabled Jan 17 00:00:04.212442 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:00:04.212450 kernel: iommu: Default domain type: Translated Jan 17 00:00:04.212457 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:00:04.212464 kernel: efivars: Registered efivars operations Jan 17 00:00:04.212472 kernel: vgaarb: loaded Jan 17 00:00:04.212479 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:00:04.212487 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:00:04.212494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:00:04.212502 kernel: pnp: PnP ACPI init Jan 17 00:00:04.212511 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:00:04.212518 kernel: NET: Registered PF_INET protocol family Jan 17 00:00:04.212526 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:00:04.212533 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:00:04.212541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:00:04.212548 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:00:04.212556 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:00:04.212563 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:00:04.212571 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:00:04.212580 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:00:04.212588 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:00:04.212595 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:00:04.212602 kernel: kvm [1]: HYP mode not available Jan 17 00:00:04.212610 kernel: Initialise system trusted keyrings Jan 17 00:00:04.212617 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:00:04.212625 kernel: Key type asymmetric registered Jan 17 00:00:04.212632 kernel: Asymmetric key parser 'x509' registered Jan 17 00:00:04.212639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:00:04.212649 kernel: io scheduler mq-deadline registered Jan 17 00:00:04.212656 kernel: io scheduler kyber registered Jan 17 00:00:04.212663 kernel: io scheduler bfq registered Jan 17 00:00:04.212671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:00:04.212678 kernel: thunder_xcv, ver 1.0 Jan 17 00:00:04.212685 kernel: thunder_bgx, ver 1.0 Jan 17 00:00:04.212693 kernel: nicpf, ver 1.0 Jan 17 00:00:04.212700 kernel: nicvf, ver 1.0 Jan 17 00:00:04.212856 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:00:04.212938 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:00:03 UTC (1768608003) Jan 17 00:00:04.212949 kernel: efifb: probing for efifb Jan 17 00:00:04.212956 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:00:04.212964 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:00:04.212971 kernel: efifb: scrolling: redraw Jan 17 00:00:04.212979 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:00:04.212986 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:00:04.212994 kernel: fb0: EFI VGA frame buffer device Jan 17 00:00:04.213003 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:00:04.213011 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:00:04.213019 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:00:04.213026 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:00:04.213034 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:00:04.213041 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:00:04.213049 kernel: Segment Routing with IPv6 Jan 17 00:00:04.213056 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:00:04.213063 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:00:04.213072 kernel: Key type dns_resolver registered Jan 17 00:00:04.213080 kernel: registered taskstats version 1 Jan 17 00:00:04.213087 kernel: Loading compiled-in X.509 certificates Jan 17 00:00:04.213095 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:00:04.213103 kernel: Key type .fscrypt registered Jan 17 00:00:04.213110 kernel: Key type fscrypt-provisioning registered Jan 17 00:00:04.213117 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:00:04.213125 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:00:04.213132 kernel: ima: No architecture policies found Jan 17 00:00:04.213141 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:00:04.213148 kernel: clk: Disabling unused clocks Jan 17 00:00:04.213156 kernel: Freeing unused kernel memory: 39424K Jan 17 00:00:04.213163 kernel: Run /init as init process Jan 17 00:00:04.213171 kernel: with arguments: Jan 17 00:00:04.213178 kernel: /init Jan 17 00:00:04.213185 kernel: with environment: Jan 17 00:00:04.213192 kernel: HOME=/ Jan 17 00:00:04.213200 kernel: TERM=linux Jan 17 00:00:04.213209 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:00:04.213221 systemd[1]: Detected virtualization microsoft. Jan 17 00:00:04.213229 systemd[1]: Detected architecture arm64. Jan 17 00:00:04.213237 systemd[1]: Running in initrd. Jan 17 00:00:04.213244 systemd[1]: No hostname configured, using default hostname. Jan 17 00:00:04.213252 systemd[1]: Hostname set to . Jan 17 00:00:04.213260 systemd[1]: Initializing machine ID from random generator. Jan 17 00:00:04.213270 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:00:04.213278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:00:04.213286 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:00:04.213295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:00:04.213304 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:00:04.213312 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:00:04.213320 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:00:04.213330 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:00:04.213340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:00:04.213348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:00:04.213356 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:00:04.213364 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:00:04.213372 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:00:04.213380 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:00:04.213388 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:00:04.213397 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:00:04.213407 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:00:04.213415 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:00:04.215459 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:00:04.215470 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:00:04.215478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:00:04.215487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:00:04.215495 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:00:04.215504 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:00:04.215517 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:00:04.215525 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:00:04.215533 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:00:04.215541 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:00:04.215574 systemd-journald[218]: Collecting audit messages is disabled. Jan 17 00:00:04.215597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:00:04.215605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:04.215615 systemd-journald[218]: Journal started Jan 17 00:00:04.215634 systemd-journald[218]: Runtime Journal (/run/log/journal/b186a0fc903f475cb88a91c7286d357e) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:00:04.223584 systemd-modules-load[219]: Inserted module 'overlay' Jan 17 00:00:04.239446 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:00:04.249198 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:00:04.264467 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:00:04.264490 kernel: Bridge firewalling registered Jan 17 00:00:04.257831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:00:04.261449 systemd-modules-load[219]: Inserted module 'br_netfilter' Jan 17 00:00:04.270295 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:00:04.285435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:00:04.292093 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:04.309863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:00:04.316590 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:00:04.329259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:00:04.350582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:00:04.358537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:04.375931 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:00:04.381029 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:00:04.400616 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:00:04.410566 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 00:00:04.420919 dracut-cmdline[251]: dracut-dracut-053 Jan 17 00:00:04.420919 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:00:04.425669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:00:04.434762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:00:04.481004 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:00:04.507786 systemd-resolved[281]: Positive Trust Anchors: Jan 17 00:00:04.507803 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:00:04.507834 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:00:04.510519 systemd-resolved[281]: Defaulting to hostname 'linux'. Jan 17 00:00:04.511313 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:00:04.517308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:00:04.593440 kernel: SCSI subsystem initialized Jan 17 00:00:04.600426 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:00:04.610437 kernel: iscsi: registered transport (tcp) Jan 17 00:00:04.627078 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:00:04.627110 kernel: QLogic iSCSI HBA Driver Jan 17 00:00:04.665128 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:00:04.677530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:00:04.718654 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:00:04.718702 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:00:04.723908 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:00:04.771442 kernel: raid6: neonx8 gen() 15793 MB/s Jan 17 00:00:04.790435 kernel: raid6: neonx4 gen() 15698 MB/s Jan 17 00:00:04.810425 kernel: raid6: neonx2 gen() 13258 MB/s Jan 17 00:00:04.829424 kernel: raid6: neonx1 gen() 10489 MB/s Jan 17 00:00:04.848424 kernel: raid6: int64x8 gen() 6978 MB/s Jan 17 00:00:04.869425 kernel: raid6: int64x4 gen() 7357 MB/s Jan 17 00:00:04.888424 kernel: raid6: int64x2 gen() 6147 MB/s Jan 17 00:00:04.910452 kernel: raid6: int64x1 gen() 5071 MB/s Jan 17 00:00:04.910473 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s Jan 17 00:00:04.934548 kernel: raid6: .... 
xor() 12042 MB/s, rmw enabled Jan 17 00:00:04.934560 kernel: raid6: using neon recovery algorithm Jan 17 00:00:04.944251 kernel: xor: measuring software checksum speed Jan 17 00:00:04.944265 kernel: 8regs : 19750 MB/sec Jan 17 00:00:04.947142 kernel: 32regs : 19660 MB/sec Jan 17 00:00:04.953161 kernel: arm64_neon : 26238 MB/sec Jan 17 00:00:04.953173 kernel: xor: using function: arm64_neon (26238 MB/sec) Jan 17 00:00:05.002431 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:00:05.012850 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:00:05.026548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:00:05.046542 systemd-udevd[438]: Using default interface naming scheme 'v255'. Jan 17 00:00:05.050972 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:00:05.067587 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:00:05.080085 dracut-pre-trigger[446]: rd.md=0: removing MD RAID activation Jan 17 00:00:05.106413 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:00:05.119532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:00:05.159632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:00:05.174585 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:00:05.191330 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:00:05.205264 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:00:05.216302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:00:05.226501 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:00:05.240592 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:00:05.259254 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:00:05.277434 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:00:05.277588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:00:05.281922 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:05.293970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:00:05.308851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:05.359949 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:00:05.359975 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:00:05.359985 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:00:05.359995 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:00:05.360004 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:00:05.360013 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 17 00:00:05.360041 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:00:05.360061 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 17 00:00:05.309041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 00:00:05.382349 kernel: scsi host0: storvsc_host_t Jan 17 00:00:05.382585 kernel: scsi host1: storvsc_host_t Jan 17 00:00:05.382705 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:00:05.382817 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:00:05.322203 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:05.397697 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:00:05.385786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:05.404894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:05.404993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:05.427020 kernel: PTP clock support registered Jan 17 00:00:05.429619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:05.164083 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:00:05.177684 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:00:05.177701 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:00:05.177709 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:00:05.177720 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:00:05.177728 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: VF slot 1 added Jan 17 00:00:05.177850 systemd-journald[218]: Time jumped backwards, rotating. Jan 17 00:00:05.177887 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:00:05.162294 systemd-resolved[281]: Clock change detected. Flushing caches. Jan 17 00:00:05.201600 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:00:05.201619 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:00:05.201784 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:00:05.177311 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:05.246880 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:00:05.247072 kernel: hv_pci 1e7ef999-e6fd-45c3-8a89-29fc06e9ff9d: PCI VMBus probing: Using version 0x10004 Jan 17 00:00:05.247196 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:00:05.247301 kernel: hv_pci 1e7ef999-e6fd-45c3-8a89-29fc06e9ff9d: PCI host bridge to bus e6fd:00 Jan 17 00:00:05.247386 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:00:05.247480 kernel: pci_bus e6fd:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:00:05.247579 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:00:05.247672 kernel: pci_bus e6fd:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:00:05.247755 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:05.227682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 17 00:00:05.268055 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:00:05.268237 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:00:05.268345 kernel: pci e6fd:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:00:05.283197 kernel: pci e6fd:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:00:05.296554 kernel: pci e6fd:00:02.0: enabling Extended Tags Jan 17 00:00:05.296618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#288 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:00:05.313335 kernel: pci e6fd:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e6fd:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:00:05.317412 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:05.334131 kernel: pci_bus e6fd:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:00:05.334306 kernel: pci e6fd:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:00:05.346191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:00:05.380580 kernel: mlx5_core e6fd:00:02.0: enabling device (0000 -> 0002) Jan 17 00:00:05.388857 kernel: mlx5_core e6fd:00:02.0: firmware version: 16.30.5026 Jan 17 00:00:05.581673 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: VF registering: eth1 Jan 17 00:00:05.581870 kernel: mlx5_core e6fd:00:02.0 eth1: joined to eth0 Jan 17 00:00:05.589292 kernel: mlx5_core e6fd:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:00:05.600185 kernel: mlx5_core e6fd:00:02.0 enP59133s1: renamed from eth1 Jan 17 00:00:05.752544 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:00:05.804798 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:00:05.824180 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (483) Jan 17 00:00:05.841300 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (503) Jan 17 00:00:05.844757 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:00:05.853931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:00:05.879268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:00:05.896457 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:00:05.920216 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:05.928179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:05.936197 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:06.939683 disk-uuid[610]: The operation has completed successfully. Jan 17 00:00:06.943732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:00:07.012152 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:00:07.012275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:00:07.032412 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:00:07.037418 sh[723]: Success Jan 17 00:00:07.074273 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:00:07.363105 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Jan 17 00:00:07.370107 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:00:07.380311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:00:07.407019 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:00:07.407059 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:07.412727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:00:07.416952 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:00:07.420662 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:00:07.702781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:00:07.707134 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:00:07.724297 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:00:07.735332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:00:07.761438 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:07.761484 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:07.764955 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:00:07.806435 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:00:07.813871 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:00:07.825244 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:07.832931 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:00:07.848399 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:00:07.856665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:00:07.885048 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:00:07.905809 systemd-networkd[911]: lo: Link UP Jan 17 00:00:07.905821 systemd-networkd[911]: lo: Gained carrier Jan 17 00:00:07.907433 systemd-networkd[911]: Enumeration completed Jan 17 00:00:07.907955 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:07.907958 systemd-networkd[911]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:00:07.909328 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:00:07.920367 systemd[1]: Reached target network.target - Network. Jan 17 00:00:07.992196 kernel: mlx5_core e6fd:00:02.0 enP59133s1: Link up Jan 17 00:00:08.026783 systemd-networkd[911]: enP59133s1: Link UP Jan 17 00:00:08.030297 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: Data path switched to VF: enP59133s1 Jan 17 00:00:08.026881 systemd-networkd[911]: eth0: Link UP Jan 17 00:00:08.027705 systemd-networkd[911]: eth0: Gained carrier Jan 17 00:00:08.027715 systemd-networkd[911]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 00:00:08.034392 systemd-networkd[911]: enP59133s1: Gained carrier Jan 17 00:00:08.055221 systemd-networkd[911]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:00:08.924371 ignition[903]: Ignition 2.19.0 Jan 17 00:00:08.924381 ignition[903]: Stage: fetch-offline Jan 17 00:00:08.924416 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:08.924424 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:08.924512 ignition[903]: parsed url from cmdline: "" Jan 17 00:00:08.924515 ignition[903]: no config URL provided Jan 17 00:00:08.924519 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:00:08.924526 ignition[903]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:00:08.924530 ignition[903]: failed to fetch config: resource requires networking Jan 17 00:00:08.924683 ignition[903]: Ignition finished successfully Jan 17 00:00:08.927407 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:00:08.937431 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:00:08.960055 ignition[923]: Ignition 2.19.0 Jan 17 00:00:08.960062 ignition[923]: Stage: fetch Jan 17 00:00:08.962914 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:08.962931 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:08.963031 ignition[923]: parsed url from cmdline: "" Jan 17 00:00:08.963033 ignition[923]: no config URL provided Jan 17 00:00:08.963038 ignition[923]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:00:08.963045 ignition[923]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:00:08.963066 ignition[923]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:00:09.048255 ignition[923]: GET result: OK Jan 17 00:00:09.048333 ignition[923]: config has been read from IMDS userdata Jan 17 00:00:09.048383 ignition[923]: parsing config with SHA512: 67475a5a0c55cf8a9b4cd3c94f3e94950e62426bab8af3a08a5f040b62456eeb1ec22864bb7580c4ee48bcff473f47646bc01cb825569db43a250d9cd349ea9b Jan 17 00:00:09.052616 unknown[923]: fetched base config from "system" Jan 17 00:00:09.052623 unknown[923]: fetched base config from "system" Jan 17 00:00:09.052628 unknown[923]: fetched user config from "azure" Jan 17 00:00:09.053137 ignition[923]: fetch: fetch complete Jan 17 00:00:09.053142 ignition[923]: fetch: fetch passed Jan 17 00:00:09.053200 ignition[923]: Ignition finished successfully Jan 17 00:00:09.056210 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:00:09.080346 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:00:09.100911 ignition[929]: Ignition 2.19.0 Jan 17 00:00:09.100928 ignition[929]: Stage: kargs Jan 17 00:00:09.101163 ignition[929]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:09.101206 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:09.102456 ignition[929]: kargs: kargs passed Jan 17 00:00:09.102516 ignition[929]: Ignition finished successfully Jan 17 00:00:09.107759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:00:09.128393 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:00:09.146313 ignition[935]: Ignition 2.19.0 Jan 17 00:00:09.146322 ignition[935]: Stage: disks Jan 17 00:00:09.146504 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:09.146515 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:09.147432 ignition[935]: disks: disks passed Jan 17 00:00:09.147486 ignition[935]: Ignition finished successfully Jan 17 00:00:09.149855 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:00:09.156404 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:00:09.165382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:00:09.174573 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:00:09.183665 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:00:09.192780 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:00:09.218420 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:00:09.294183 systemd-fsck[943]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 00:00:09.301780 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:00:09.317357 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:00:09.374190 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:00:09.374719 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:00:09.378925 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:00:09.418236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:00:09.443229 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954) Jan 17 00:00:09.453884 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:09.453919 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:09.454294 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:00:09.466005 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:00:09.466348 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:00:09.471596 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:00:09.471629 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:00:09.478055 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:00:09.496355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:00:09.504094 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:00:09.518692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:00:10.057348 coreos-metadata[969]: Jan 17 00:00:10.057 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:00:10.064649 coreos-metadata[969]: Jan 17 00:00:10.064 INFO Fetch successful Jan 17 00:00:10.064649 coreos-metadata[969]: Jan 17 00:00:10.064 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:00:10.079155 coreos-metadata[969]: Jan 17 00:00:10.079 INFO Fetch successful Jan 17 00:00:10.079155 coreos-metadata[969]: Jan 17 00:00:10.079 INFO wrote hostname ci-4081.3.6-n-e1db9b2d97 to /sysroot/etc/hostname Jan 17 00:00:10.081579 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
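The two metadata calls logged above (ignition-fetch reading userData, flatcar-metadata-hostname reading compute/name) follow the same pattern against the Azure IMDS endpoint. A minimal sketch of both requests, assuming the standard IMDS contract: every call carries a "Metadata: true" header, and userData (unlike compute/name) comes back base64-encoded. The endpoints, api-versions, and the /sysroot/etc/hostname path are taken from the log lines; error handling and retries are omitted.

```python
import base64
import urllib.request

IMDS = "http://169.254.169.254/metadata/instance/compute"

def imds_get(path: str) -> bytes:
    # Azure IMDS rejects requests without the Metadata header.
    req = urllib.request.Request(IMDS + path, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

# ignition-fetch: user-provided Ignition config delivered via userData,
# assumed base64-encoded as Azure returns it.
user_data = base64.b64decode(
    imds_get("/userData?api-version=2021-01-01&format=text"))

# flatcar-metadata-hostname: instance name, persisted as the hostname.
name = imds_get("/name?api-version=2017-08-01&format=text").decode().strip()
with open("/sysroot/etc/hostname", "w") as f:  # path taken from the log
    f.write(name + "\n")
```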
Jan 17 00:00:10.089344 systemd-networkd[911]: eth0: Gained IPv6LL Jan 17 00:00:10.289889 initrd-setup-root[983]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:00:10.328474 initrd-setup-root[990]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:00:10.349898 initrd-setup-root[997]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:00:10.355535 initrd-setup-root[1004]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:00:11.112363 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:00:11.124376 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:00:11.130746 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:00:11.151628 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:11.149111 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:00:11.174857 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:00:11.187777 ignition[1073]: INFO : Ignition 2.19.0 Jan 17 00:00:11.187777 ignition[1073]: INFO : Stage: mount Jan 17 00:00:11.195095 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:11.195095 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:11.195095 ignition[1073]: INFO : mount: mount passed Jan 17 00:00:11.195095 ignition[1073]: INFO : Ignition finished successfully Jan 17 00:00:11.192729 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:00:11.212366 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:00:11.229133 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:00:11.250197 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1083) Jan 17 00:00:11.250254 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:00:11.261157 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:00:11.264884 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:00:11.272206 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:00:11.273489 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:00:11.299207 ignition[1099]: INFO : Ignition 2.19.0 Jan 17 00:00:11.299207 ignition[1099]: INFO : Stage: files Jan 17 00:00:11.299207 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:11.299207 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:11.315369 ignition[1099]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:00:11.315369 ignition[1099]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:00:11.315369 ignition[1099]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:00:11.349129 ignition[1099]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:00:11.355184 ignition[1099]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:00:11.355184 ignition[1099]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:00:11.349515 unknown[1099]: wrote ssh authorized keys file for user: core Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:00:11.371662 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 17 00:00:11.519075 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:00:11.853194 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 17 00:00:11.853194 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 
00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:00:11.869617 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 17 00:00:12.417419 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:00:12.678418 ignition[1099]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 17 00:00:12.688154 ignition[1099]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 00:00:12.704746 ignition[1099]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:00:12.714950 ignition[1099]: INFO : files: files passed Jan 17 00:00:12.714950 ignition[1099]: INFO : Ignition finished successfully Jan 17 00:00:12.707395 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:00:12.737451 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:00:12.749364 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:00:12.764636 systemd[1]: ignition-quench.service: Deactivated successfully. 
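The files stage above is driven by declarative entries in the config fetched from userData. A hedged sketch of Ignition JSON that would produce these operations: only the paths, URLs, unit names, and the preset flag come from the log; the spec version, the SSH key, the drop-in body, and the elided unit contents are assumptions for illustration.

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA..."] }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.0-linux-arm64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz" }
      }
    ],
    "links": [
      {
        "path": "/etc/extensions/kubernetes.raw",
        "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "containerd.service",
        "dropins": [
          {
            "name": "10-use-cgroupfs.conf",
            "contents": "[Service]\nEnvironment=CONTAINERD_CONFIG=/usr/share/containerd/config-cgroupfs.toml"
          }
        ]
      },
      { "name": "prepare-helm.service", "enabled": true, "contents": "..." }
    ]
  }
}
```

The drop-in body shown is an assumed pairing with the /etc/flatcar-cgroupv1 flag file the same stage writes; the log records only the drop-in's path, not its contents.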
Jan 17 00:00:12.764729 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:00:12.805260 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:00:12.818501 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:00:12.836412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:00:12.861482 initrd-setup-root-after-ignition[1127]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:00:12.861482 initrd-setup-root-after-ignition[1127]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:00:12.880387 initrd-setup-root-after-ignition[1131]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:00:12.895590 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:00:12.895690 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:00:12.903612 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:00:12.914156 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:00:12.923177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:00:12.936303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:00:12.960405 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:00:12.973696 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:00:12.990256 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:00:12.995780 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:00:13.006141 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:00:13.015512 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:00:13.015636 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:00:13.029125 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:00:13.034118 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:00:13.043410 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:00:13.052737 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:00:13.061741 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:00:13.071352 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:00:13.080682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:00:13.091098 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:00:13.100150 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:00:13.110286 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:00:13.118576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:00:13.118698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:00:13.130733 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:00:13.135815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:00:13.145322 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:00:13.149522 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:00:13.155292 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:00:13.155405 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:00:13.169670 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:00:13.169789 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:00:13.175491 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:00:13.175583 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:00:13.184059 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:00:13.184154 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:00:13.210480 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:00:13.224353 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:00:13.224532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:00:13.250852 ignition[1152]: INFO : Ignition 2.19.0 Jan 17 00:00:13.250852 ignition[1152]: INFO : Stage: umount Jan 17 00:00:13.250852 ignition[1152]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:00:13.250852 ignition[1152]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:00:13.289127 ignition[1152]: INFO : umount: umount passed Jan 17 00:00:13.289127 ignition[1152]: INFO : Ignition finished successfully Jan 17 00:00:13.258045 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:00:13.266607 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:00:13.266767 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:00:13.278679 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:00:13.278784 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:00:13.297575 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:00:13.298383 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:00:13.298492 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:00:13.316347 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:00:13.316439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:00:13.323237 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:00:13.323327 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:00:13.328944 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:00:13.328995 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:00:13.339712 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:00:13.339760 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:00:13.348074 systemd[1]: Stopped target network.target - Network. Jan 17 00:00:13.358060 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:00:13.358121 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:00:13.368941 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:00:13.377508 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:00:13.382095 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:00:13.387874 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:00:13.395711 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:00:13.406246 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:00:13.406292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:00:13.419016 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:00:13.419061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:00:13.429321 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:00:13.429371 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:00:13.438136 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:00:13.438184 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:00:13.447152 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:00:13.460779 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:00:13.475212 systemd-networkd[911]: eth0: DHCPv6 lease lost Jan 17 00:00:13.479768 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:00:13.479892 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:00:13.488809 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:00:13.488923 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:00:13.672288 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: Data path switched from VF: enP59133s1 Jan 17 00:00:13.503947 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:00:13.504003 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:00:13.529381 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:00:13.538082 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:00:13.538141 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:00:13.547704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:00:13.547750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:00:13.557026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:00:13.557069 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:00:13.566025 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:00:13.566066 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:00:13.575612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:00:13.600787 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:00:13.602213 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:00:13.610377 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:00:13.610430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:00:13.620873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:00:13.620914 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:00:13.629533 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 17 00:00:13.629588 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:00:13.643308 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:00:13.643361 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:00:13.659504 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:00:13.659563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:00:13.673369 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:00:13.686805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:00:13.686865 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:00:13.697496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:13.697541 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:13.707450 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:00:13.707555 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:00:13.716376 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:00:13.716455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:00:13.727573 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:00:13.727663 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:00:13.737339 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:00:13.737446 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:00:13.746404 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:00:13.770402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:00:13.980476 systemd[1]: Switching root. Jan 17 00:00:14.009699 systemd-journald[218]: Journal stopped Jan 17 00:00:21.775161 systemd-journald[218]: Received SIGTERM from PID 1 (systemd). Jan 17 00:00:21.775203 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:00:21.775215 kernel: SELinux: policy capability open_perms=1 Jan 17 00:00:21.775226 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:00:21.775234 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:00:21.775242 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:00:21.775252 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:00:21.775260 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:00:21.775269 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:00:21.775277 kernel: audit: type=1403 audit(1768608015.578:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:00:21.775288 systemd[1]: Successfully loaded SELinux policy in 170.856ms. Jan 17 00:00:21.775299 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.685ms. Jan 17 00:00:21.775309 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:00:21.775319 systemd[1]: Detected virtualization microsoft. Jan 17 00:00:21.775329 systemd[1]: Detected architecture arm64. 
Jan 17 00:00:21.775340 systemd[1]: Detected first boot. Jan 17 00:00:21.775350 systemd[1]: Hostname set to <ci-4081.3.6-n-e1db9b2d97>. Jan 17 00:00:21.775360 systemd[1]: Initializing machine ID from random generator. Jan 17 00:00:21.775369 zram_generator::config[1211]: No configuration found. Jan 17 00:00:21.775382 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:00:21.775392 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:00:21.775403 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:00:21.775414 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:00:21.775424 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:00:21.775433 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:00:21.775443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:00:21.775453 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:00:21.775463 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:00:21.775475 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:00:21.775484 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:00:21.775494 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:00:21.775504 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:00:21.775514 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:00:21.775524 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:00:21.775534 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:00:21.775544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:00:21.775554 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 00:00:21.775565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:00:21.775575 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:00:21.775585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:00:21.775598 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:00:21.775608 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:00:21.775619 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:00:21.775629 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:00:21.775640 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:00:21.775650 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:00:21.775660 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:00:21.775670 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:00:21.775681 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:00:21.775691 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:00:21.775701 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:00:21.775713 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:00:21.775723 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:00:21.775733 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:00:21.775743 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:00:21.775754 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:00:21.775764 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:00:21.775776 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:00:21.775786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:21.775797 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:00:21.775807 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:00:21.775819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:00:21.775830 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:00:21.775840 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:00:21.775850 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:00:21.775860 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:00:21.775872 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:00:21.775883 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:00:21.775894 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:00:21.775904 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:00:21.775914 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:00:21.775924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:00:21.775949 systemd-journald[1304]: Collecting audit messages is disabled. Jan 17 00:00:21.775972 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:00:21.775983 systemd-journald[1304]: Journal started Jan 17 00:00:21.776003 systemd-journald[1304]: Runtime Journal (/run/log/journal/62135ca0f4954cc7a10cc2cbf2596c2c) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:00:21.794293 kernel: loop: module loaded Jan 17 00:00:21.814093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:00:21.818193 kernel: fuse: init (API version 7.39) Jan 17 00:00:21.829891 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:00:21.833254 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:00:21.839245 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:00:21.854847 kernel: ACPI: bus type drm_connector registered Jan 17 00:00:21.850663 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:00:21.855663 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:00:21.860771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jan 17 00:00:21.865907 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:00:21.870570 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:00:21.876525 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:00:21.876762 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:00:21.882622 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:00:21.882861 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:00:21.888285 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:00:21.888496 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:00:21.893640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:00:21.893852 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:00:21.900314 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:00:21.900526 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:00:21.906252 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:00:21.906542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:00:21.912899 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:00:21.918745 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:00:21.925498 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:00:21.932158 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:00:21.946429 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:00:21.958300 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:00:21.964735 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:00:21.969978 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:00:22.109333 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:00:22.117030 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:00:22.123705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:00:22.125342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:00:22.130614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:00:22.132953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:00:22.146368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:00:22.160272 systemd-journald[1304]: Time spent on flushing to /var/log/journal/62135ca0f4954cc7a10cc2cbf2596c2c is 278.470ms for 880 entries. Jan 17 00:00:22.160272 systemd-journald[1304]: System Journal (/var/log/journal/62135ca0f4954cc7a10cc2cbf2596c2c) is 11.8M, max 2.6G, 2.6G free. Jan 17 00:00:22.977236 systemd-journald[1304]: Received client request to flush runtime journal. 
Jan 17 00:00:22.977294 systemd-journald[1304]: /var/log/journal/62135ca0f4954cc7a10cc2cbf2596c2c/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 17 00:00:22.977319 systemd-journald[1304]: Rotating system journal. Jan 17 00:00:22.169426 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:00:22.176670 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:00:22.182642 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:00:22.188434 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:00:22.194745 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:00:22.216190 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:00:22.225759 udevadm[1371]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:00:22.972119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:00:22.981636 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:00:23.033982 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Jan 17 00:00:23.033998 systemd-tmpfiles[1368]: ACLs are not supported, ignoring. Jan 17 00:00:23.037947 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:00:23.048324 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:00:23.630561 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:00:23.643325 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:00:23.656650 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Jan 17 00:00:23.656665 systemd-tmpfiles[1391]: ACLs are not supported, ignoring. Jan 17 00:00:23.661661 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:00:24.381331 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:00:24.392295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:00:24.412042 systemd-udevd[1397]: Using default interface naming scheme 'v255'. Jan 17 00:00:24.699551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:00:24.719775 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:00:24.760281 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 17 00:00:25.021250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#10 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:00:25.039301 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:00:25.377109 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:00:25.392195 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:00:25.397390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:25.414518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:25.414775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 00:00:25.429236 kernel: hv_vmbus: registering driver hv_balloon Jan 17 00:00:25.429287 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 00:00:25.429753 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:25.437299 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 17 00:00:25.475152 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 00:00:25.475238 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 00:00:25.480333 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 00:00:25.484356 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:00:25.487343 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:00:25.501526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:00:25.501757 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:25.514373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:00:25.576118 systemd-networkd[1412]: lo: Link UP Jan 17 00:00:25.576440 systemd-networkd[1412]: lo: Gained carrier Jan 17 00:00:25.578397 systemd-networkd[1412]: Enumeration completed Jan 17 00:00:25.578595 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:00:25.578876 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:25.578932 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:00:25.590316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:00:25.634189 kernel: mlx5_core e6fd:00:02.0 enP59133s1: Link up Jan 17 00:00:25.657969 systemd-networkd[1412]: enP59133s1: Link UP Jan 17 00:00:25.658066 systemd-networkd[1412]: eth0: Link UP Jan 17 00:00:25.658073 systemd-networkd[1412]: eth0: Gained carrier Jan 17 00:00:25.658103 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:00:25.658180 kernel: hv_netvsc 7ced8dd4-7067-7ced-8dd4-70677ced8dd4 eth0: Data path switched to VF: enP59133s1 Jan 17 00:00:25.667392 systemd-networkd[1412]: enP59133s1: Gained carrier Jan 17 00:00:25.677198 systemd-networkd[1412]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:00:26.299191 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1399) Jan 17 00:00:26.350515 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:00:26.672291 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:00:26.682322 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:00:26.876188 lvm[1493]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:00:27.117640 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:00:27.124019 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:00:27.134281 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:00:27.143098 lvm[1497]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
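Both the initrd networkd instance (911) earlier and this one (1412) match eth0 against /usr/lib/systemd/network/zz-default.network. The shipped file is not quoted in the log; a representative catch-all unit of this shape (contents assumed) would be:

```ini
# zz-default.network (representative sketch; actual shipped contents
# are not shown in the log). The zz- prefix sorts it last, so it only
# claims interfaces no earlier .network file matched, and brings them
# up via DHCP -- consistent with the DHCPv4 lease logged above.
[Match]
Name=*

[Network]
DHCP=yes
```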
Jan 17 00:00:27.169321 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:00:27.175624 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:00:27.181843 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:00:27.181948 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:00:27.186595 systemd[1]: Reached target machines.target - Containers. Jan 17 00:00:27.191919 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:00:27.202305 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:00:27.208575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:00:27.213547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:27.215308 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:00:27.222082 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:00:27.229316 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:00:27.235287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:00:27.418611 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:00:27.684406 systemd-networkd[1412]: eth0: Gained IPv6LL Jan 17 00:00:27.691008 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:00:27.868599 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:00:27.877431 kernel: loop0: detected capacity change from 0 to 114432 Jan 17 00:00:29.303686 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:00:29.304723 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:00:30.110277 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:00:30.146195 kernel: loop1: detected capacity change from 0 to 31320 Jan 17 00:00:31.579190 kernel: loop2: detected capacity change from 0 to 114328 Jan 17 00:00:31.938209 kernel: loop3: detected capacity change from 0 to 207008 Jan 17 00:00:31.980190 kernel: loop4: detected capacity change from 0 to 114432 Jan 17 00:00:32.023208 kernel: loop5: detected capacity change from 0 to 31320 Jan 17 00:00:32.035211 kernel: loop6: detected capacity change from 0 to 114328 Jan 17 00:00:32.047191 kernel: loop7: detected capacity change from 0 to 207008 Jan 17 00:00:32.058755 (sd-merge)[1523]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 17 00:00:32.059181 (sd-merge)[1523]: Merged extensions into '/usr'. Jan 17 00:00:32.073242 systemd[1]: Reloading requested from client PID 1507 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:00:32.073255 systemd[1]: Reloading... Jan 17 00:00:32.130199 zram_generator::config[1547]: No configuration found. 
Jan 17 00:00:32.259585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:32.329848 systemd[1]: Reloading finished in 256 ms. Jan 17 00:00:32.344704 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:00:32.357311 systemd[1]: Starting ensure-sysext.service... Jan 17 00:00:32.362126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:00:32.371601 systemd[1]: Reloading requested from client PID 1611 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:00:32.372737 systemd[1]: Reloading... Jan 17 00:00:32.405083 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:00:32.405423 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:00:32.406109 systemd-tmpfiles[1612]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:00:32.407421 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Jan 17 00:00:32.407566 systemd-tmpfiles[1612]: ACLs are not supported, ignoring. Jan 17 00:00:32.410978 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:00:32.411080 systemd-tmpfiles[1612]: Skipping /boot Jan 17 00:00:32.420798 systemd-tmpfiles[1612]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:00:32.420899 systemd-tmpfiles[1612]: Skipping /boot Jan 17 00:00:32.450195 zram_generator::config[1650]: No configuration found. Jan 17 00:00:32.560566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:32.633825 systemd[1]: Reloading finished in 260 ms. Jan 17 00:00:32.650110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:00:32.663891 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:00:32.670662 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:00:32.685386 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:00:32.693345 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:00:32.702295 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:00:32.711305 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:32.714394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:00:32.720452 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:00:32.732677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:00:32.742890 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:32.743801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:00:32.743950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
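The (sd-merge) pass above overlays the four extension images onto /usr, which is why systemd immediately reloads its unit set. Merging is gated on an extension-release file inside each image whose fields must match the host's os-release; a representative file for the kubernetes image, with field values assumed for illustration:

```ini
# /usr/lib/extension-release.d/extension-release.kubernetes
# (carried inside the .raw image; values here are assumptions)
ID=flatcar
SYSEXT_LEVEL=1.0
```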
Jan 17 00:00:32.752726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:00:32.752866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:00:32.760453 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:00:32.760709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:00:32.773466 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:00:32.784600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:32.789664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:00:32.798417 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:00:32.814114 systemd-resolved[1710]: Positive Trust Anchors: Jan 17 00:00:32.814134 systemd-resolved[1710]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:00:32.814165 systemd-resolved[1710]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:00:32.817401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:00:32.822667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:32.826621 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:00:32.833430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:00:32.833594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:00:32.834834 systemd-resolved[1710]: Using system hostname 'ci-4081.3.6-n-e1db9b2d97'. Jan 17 00:00:32.839149 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:00:32.844958 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:00:32.845122 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:00:32.851448 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:00:32.851649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:00:32.858510 augenrules[1745]: No rules Jan 17 00:00:32.860525 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:00:32.875091 systemd[1]: Reached target network.target - Network. Jan 17 00:00:32.879453 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:00:32.884417 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:00:32.890085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:00:32.896321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:00:32.902329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 17 00:00:32.911406 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:00:32.918725 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:00:32.923807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:00:32.924003 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:00:32.929701 systemd[1]: Finished ensure-sysext.service. Jan 17 00:00:32.934118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:00:32.934443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:00:32.939816 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:00:32.940057 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:00:32.946066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:00:32.946669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:00:32.952676 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:00:32.953017 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:00:32.960540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:00:32.961301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:00:33.315829 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:00:33.322353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:00:36.083198 ldconfig[1504]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:00:36.096204 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:00:36.107280 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:00:36.119961 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:00:36.125195 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:00:36.129932 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:00:36.135329 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:00:36.140903 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:00:36.145590 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:00:36.150985 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:00:36.156690 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:00:36.156730 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:00:36.160742 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:00:36.167255 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:00:36.173595 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 17 00:00:36.178958 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:00:36.183851 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:00:36.188488 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:00:36.192551 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:00:36.196761 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:00:36.196797 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:00:36.196819 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:00:36.199288 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 00:00:36.206283 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:00:36.214324 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:00:36.224056 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:00:36.240581 (chronyd)[1782]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 00:00:36.241365 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:00:36.246838 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:00:36.253436 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:00:36.253478 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 00:00:36.254899 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 00:00:36.260513 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:00:36.264280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:36.264502 KVP[1791]: KVP starting; pid is:1791 Jan 17 00:00:36.270048 chronyd[1795]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:00:36.270693 jq[1789]: false Jan 17 00:00:36.276667 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:00:36.286361 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:00:36.298379 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 17 00:00:36.301052 extend-filesystems[1790]: Found loop4 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found loop5 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found loop6 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found loop7 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda1 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda2 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda3 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found usr Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda4 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda6 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda7 Jan 17 00:00:36.308776 extend-filesystems[1790]: Found sda9 Jan 17 00:00:36.308776 extend-filesystems[1790]: Checking size of /dev/sda9 Jan 17 00:00:36.455224 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:00:36.455370 extend-filesystems[1790]: Old size kept for /dev/sda9 Jan 17 00:00:36.455370 extend-filesystems[1790]: Found sr0 Jan 17 00:00:36.311933 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:00:36.312233 chronyd[1795]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.473 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.478 INFO Fetch successful Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.478 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.484 INFO Fetch successful Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.485 INFO Fetching http://168.63.129.16/machine/5c07ceb4-b8f3-483e-8afc-2bb82a9896c5/f2de78c4%2D6330%2D4958%2D8ff1%2D67f307acbe89.%5Fci%2D4081.3.6%2Dn%2De1db9b2d97?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.487 INFO Fetch successful Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.487 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:00:36.501579 coreos-metadata[1784]: Jan 17 00:00:36.499 INFO Fetch successful Jan 17 00:00:36.339352 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:00:36.312412 chronyd[1795]: Loaded seccomp filter (level 2) Jan 17 00:00:36.350343 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:00:36.373647 dbus-daemon[1788]: [system] SELinux support is enabled Jan 17 00:00:36.502306 update_engine[1824]: I20260117 00:00:36.480748 1824 main.cc:92] Flatcar Update Engine starting Jan 17 00:00:36.502306 update_engine[1824]: I20260117 00:00:36.492548 1824 update_check_scheduler.cc:74] Next update check in 7m16s Jan 17 00:00:36.359597 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:00:36.378724 KVP[1791]: KVP LIC Version: 3.1 Jan 17 00:00:36.369307 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:00:36.502805 jq[1829]: true Jan 17 00:00:36.391268 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:00:36.417735 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:00:36.435655 systemd[1]: Started chronyd.service - NTP client/server. 
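The metadata fetches above hit Azure's two fixed endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and the instance metadata service at 169.254.169.254. The vmSize query can be reproduced from inside the VM with nothing but the standard library; IMDS rejects requests that lack the "Metadata: true" header. The value it returns is not shown in this log.

    # Sketch: reproducing coreos-metadata's IMDS query from the entries above.
    # Only works from inside the VM; 169.254.169.254 is link-local.
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())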
Jan 17 00:00:36.455588 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:00:36.455819 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:00:36.456049 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:00:36.460337 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:00:36.477165 systemd-logind[1817]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:00:36.480321 systemd-logind[1817]: New seat seat0. Jan 17 00:00:36.491408 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:00:36.506368 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:00:36.506582 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:00:36.516122 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:00:36.542047 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:00:36.557238 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1842) Jan 17 00:00:36.560290 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:00:36.588011 (ntainerd)[1868]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:00:36.605895 jq[1866]: true Jan 17 00:00:36.608180 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:00:36.652013 dbus-daemon[1788]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:00:36.658347 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:00:36.662836 tar[1858]: linux-arm64/LICENSE Jan 17 00:00:36.663124 tar[1858]: linux-arm64/helm Jan 17 00:00:36.671869 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:00:36.672040 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:00:36.672154 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:00:36.679399 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:00:36.679520 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:00:36.689038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:00:36.703659 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:00:36.806978 bash[1916]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:00:36.809627 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:00:36.826058 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
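The bash[1916] entry records update-ssh-keys folding the provisioning key into /home/core/.ssh/authorized_keys. A sketch of an idempotent equivalent follows; the key string and home path are placeholders, and the permissions match what sshd requires.

    # Sketch: idempotently appending a public key to authorized_keys with
    # the 0700/0600 permissions sshd insists on. Inputs are placeholders.
    import os
    from pathlib import Path

    def add_authorized_key(home: Path, pubkey: str) -> None:
        ssh_dir = home / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth = ssh_dir / "authorized_keys"
        lines = auth.read_text().splitlines() if auth.exists() else []
        if pubkey not in lines:
            lines.append(pubkey)
            auth.write_text("\n".join(lines) + "\n")
        os.chmod(auth, 0o600)

    add_authorized_key(Path("/home/core"), "ssh-ed25519 AAAA... core@placeholder")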
Jan 17 00:00:37.034743 sshd_keygen[1822]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:00:37.036928 locksmithd[1917]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:00:37.069555 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:00:37.082439 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:00:37.093137 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 00:00:37.105509 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:00:37.105754 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:00:37.129668 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:00:37.145339 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 00:00:37.168606 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:00:37.182484 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:00:37.200522 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:00:37.213616 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:00:37.269710 containerd[1868]: time="2026-01-17T00:00:37.269619040Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:00:37.306552 containerd[1868]: time="2026-01-17T00:00:37.306514200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.310666 containerd[1868]: time="2026-01-17T00:00:37.310625880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:37.310783 containerd[1868]: time="2026-01-17T00:00:37.310769280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311360720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311529760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311547840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311605800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311619600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311835320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311853520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311866240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311877440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.311946960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313185 containerd[1868]: time="2026-01-17T00:00:37.312119440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:00:37.313941 containerd[1868]: time="2026-01-17T00:00:37.313917400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:00:37.314012 containerd[1868]: time="2026-01-17T00:00:37.313998840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:00:37.314265 containerd[1868]: time="2026-01-17T00:00:37.314246840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:00:37.315142 containerd[1868]: time="2026-01-17T00:00:37.315123560Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:00:37.336413 containerd[1868]: time="2026-01-17T00:00:37.336216520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:00:37.336413 containerd[1868]: time="2026-01-17T00:00:37.336276600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:00:37.336413 containerd[1868]: time="2026-01-17T00:00:37.336292800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:00:37.336413 containerd[1868]: time="2026-01-17T00:00:37.336328840Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:00:37.336413 containerd[1868]: time="2026-01-17T00:00:37.336342920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:00:37.338029 containerd[1868]: time="2026-01-17T00:00:37.338006960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:00:37.338556 tar[1858]: linux-arm64/README.md Jan 17 00:00:37.339351 containerd[1868]: time="2026-01-17T00:00:37.339312600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:00:37.339511 containerd[1868]: time="2026-01-17T00:00:37.339483920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jan 17 00:00:37.339550 containerd[1868]: time="2026-01-17T00:00:37.339513200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:00:37.339550 containerd[1868]: time="2026-01-17T00:00:37.339531400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:00:37.339591 containerd[1868]: time="2026-01-17T00:00:37.339549320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339591 containerd[1868]: time="2026-01-17T00:00:37.339566640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339591 containerd[1868]: time="2026-01-17T00:00:37.339584640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339652 containerd[1868]: time="2026-01-17T00:00:37.339603400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339652 containerd[1868]: time="2026-01-17T00:00:37.339621920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339652 containerd[1868]: time="2026-01-17T00:00:37.339639800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339743 containerd[1868]: time="2026-01-17T00:00:37.339655640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339743 containerd[1868]: time="2026-01-17T00:00:37.339672240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:00:37.339743 containerd[1868]: time="2026-01-17T00:00:37.339696560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339743 containerd[1868]: time="2026-01-17T00:00:37.339715760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339743 containerd[1868]: time="2026-01-17T00:00:37.339732240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339837 containerd[1868]: time="2026-01-17T00:00:37.339749840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339837 containerd[1868]: time="2026-01-17T00:00:37.339763880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339837 containerd[1868]: time="2026-01-17T00:00:37.339780960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339837 containerd[1868]: time="2026-01-17T00:00:37.339797160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339837 containerd[1868]: time="2026-01-17T00:00:37.339813160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339837 containerd[1868]: time="2026-01-17T00:00:37.339832080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jan 17 00:00:37.339947 containerd[1868]: time="2026-01-17T00:00:37.339852120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339947 containerd[1868]: time="2026-01-17T00:00:37.339868040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339947 containerd[1868]: time="2026-01-17T00:00:37.339880640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339947 containerd[1868]: time="2026-01-17T00:00:37.339897640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.339947 containerd[1868]: time="2026-01-17T00:00:37.339918000Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:00:37.339947 containerd[1868]: time="2026-01-17T00:00:37.339942680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.340060 containerd[1868]: time="2026-01-17T00:00:37.339959880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.340060 containerd[1868]: time="2026-01-17T00:00:37.339975240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:00:37.340060 containerd[1868]: time="2026-01-17T00:00:37.340034680Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:00:37.340114 containerd[1868]: time="2026-01-17T00:00:37.340056120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:00:37.340114 containerd[1868]: time="2026-01-17T00:00:37.340068320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:00:37.340114 containerd[1868]: time="2026-01-17T00:00:37.340085960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:00:37.340114 containerd[1868]: time="2026-01-17T00:00:37.340100080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:00:37.340979 containerd[1868]: time="2026-01-17T00:00:37.340116080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:00:37.340979 containerd[1868]: time="2026-01-17T00:00:37.340127400Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:00:37.340979 containerd[1868]: time="2026-01-17T00:00:37.340141680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:00:37.341043 containerd[1868]: time="2026-01-17T00:00:37.340454680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:00:37.341043 containerd[1868]: time="2026-01-17T00:00:37.340518640Z" level=info msg="Connect containerd service" Jan 17 00:00:37.341043 containerd[1868]: time="2026-01-17T00:00:37.340565120Z" level=info msg="using legacy CRI server" Jan 17 00:00:37.341043 containerd[1868]: time="2026-01-17T00:00:37.340572400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:00:37.341043 containerd[1868]: time="2026-01-17T00:00:37.340674200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:00:37.344280 containerd[1868]: time="2026-01-17T00:00:37.344250960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 
00:00:37.347306 containerd[1868]: time="2026-01-17T00:00:37.347264880Z" level=info msg="Start subscribing containerd event" Jan 17 00:00:37.347376 containerd[1868]: time="2026-01-17T00:00:37.347328600Z" level=info msg="Start recovering state" Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348381360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348429960Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348454400Z" level=info msg="Start event monitor" Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348464880Z" level=info msg="Start snapshots syncer" Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348473480Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348481000Z" level=info msg="Start streaming server" Jan 17 00:00:37.350186 containerd[1868]: time="2026-01-17T00:00:37.348531520Z" level=info msg="containerd successfully booted in 0.082994s" Jan 17 00:00:37.355634 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:00:37.368111 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:00:37.569372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:37.575216 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:00:37.576632 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:00:37.581889 systemd[1]: Startup finished in 12.588s (kernel) + 22.172s (userspace) = 34.760s. Jan 17 00:00:37.916554 login[1956]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:37.920125 login[1957]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:37.931254 systemd-logind[1817]: New session 2 of user core. Jan 17 00:00:37.932920 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:00:37.939755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:00:37.943645 systemd-logind[1817]: New session 1 of user core. Jan 17 00:00:37.972877 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:00:37.983038 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:00:37.988942 (systemd)[1990]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:00:38.010692 kubelet[1976]: E0117 00:00:38.010657 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:00:38.013940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:00:38.014140 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:00:38.159834 systemd[1990]: Queued start job for default target default.target. Jan 17 00:00:38.160182 systemd[1990]: Created slice app.slice - User Application Slice. Jan 17 00:00:38.160200 systemd[1990]: Reached target paths.target - Paths. 
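The kubelet exit above is expected at this stage of first boot: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, which has not run yet, so systemd keeps restarting the unit (the restart counter reaches 1 and then 2 further down). A sketch mirroring the failing check; the real kubelet is Go, and this only reproduces the error path visible in the journal.

    # Sketch: the gist of the startup check kubelet fails above. kubeadm
    # writes this file during "kubeadm init"/"kubeadm join"; until then
    # the unit exits and systemd schedules a restart.
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not CONFIG.is_file():
        raise SystemExit(f"failed to load Kubelet config file {CONFIG}: "
                         "no such file or directory")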
Jan 17 00:00:38.160211 systemd[1990]: Reached target timers.target - Timers. Jan 17 00:00:38.167236 systemd[1990]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:00:38.174284 systemd[1990]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:00:38.174907 systemd[1990]: Reached target sockets.target - Sockets. Jan 17 00:00:38.174925 systemd[1990]: Reached target basic.target - Basic System. Jan 17 00:00:38.174964 systemd[1990]: Reached target default.target - Main User Target. Jan 17 00:00:38.174986 systemd[1990]: Startup finished in 180ms. Jan 17 00:00:38.175410 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:00:38.184729 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:00:38.185494 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:00:38.789324 waagent[1952]: 2026-01-17T00:00:38.788863Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:00:38.794115 waagent[1952]: 2026-01-17T00:00:38.794053Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:00:38.798210 waagent[1952]: 2026-01-17T00:00:38.798149Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:00:38.802092 waagent[1952]: 2026-01-17T00:00:38.802038Z INFO Daemon Daemon Run daemon Jan 17 00:00:38.805660 waagent[1952]: 2026-01-17T00:00:38.805614Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:00:38.813245 waagent[1952]: 2026-01-17T00:00:38.813158Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:00:38.817651 waagent[1952]: 2026-01-17T00:00:38.817607Z INFO Daemon Daemon Activate resource disk Jan 17 00:00:38.821422 waagent[1952]: 2026-01-17T00:00:38.821379Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:00:38.830928 waagent[1952]: 2026-01-17T00:00:38.830878Z INFO Daemon Daemon Found device: None Jan 17 00:00:38.834614 waagent[1952]: 2026-01-17T00:00:38.834573Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:00:38.841400 waagent[1952]: 2026-01-17T00:00:38.841362Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:00:38.852333 waagent[1952]: 2026-01-17T00:00:38.852280Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:00:38.857036 waagent[1952]: 2026-01-17T00:00:38.856994Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:00:38.867622 waagent[1952]: 2026-01-17T00:00:38.867569Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 00:00:38.878921 waagent[1952]: 2026-01-17T00:00:38.878867Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:00:38.886704 waagent[1952]: 2026-01-17T00:00:38.886661Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:00:38.891127 waagent[1952]: 2026-01-17T00:00:38.891091Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:00:38.971208 waagent[1952]: 2026-01-17T00:00:38.971099Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:00:38.986513 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 17 00:00:38.986760 waagent[1952]: 2026-01-17T00:00:38.986695Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:00:38.990791 waagent[1952]: 2026-01-17T00:00:38.990734Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:00:38.995409 waagent[1952]: 2026-01-17T00:00:38.995364Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 17 00:00:39.000693 waagent[1952]: 2026-01-17T00:00:39.000655Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:00:39.005238 waagent[1952]: 2026-01-17T00:00:39.005195Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:00:39.009372 waagent[1952]: 2026-01-17T00:00:39.009334Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:00:39.073630 waagent[1952]: 2026-01-17T00:00:39.073545Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:00:39.079054 waagent[1952]: 2026-01-17T00:00:39.079024Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:00:39.083466 waagent[1952]: 2026-01-17T00:00:39.083428Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:00:39.392257 waagent[1952]: 2026-01-17T00:00:39.391655Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:00:39.397207 waagent[1952]: 2026-01-17T00:00:39.397145Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:00:39.405349 waagent[1952]: 2026-01-17T00:00:39.405302Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:00:39.424559 waagent[1952]: 2026-01-17T00:00:39.424513Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:00:39.429241 waagent[1952]: 2026-01-17T00:00:39.429199Z INFO Daemon Jan 17 00:00:39.431552 waagent[1952]: 2026-01-17T00:00:39.431511Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 36ca2fcf-2baf-47c8-ad8b-0f8f5fea1c90 eTag: 7530349989897770043 source: Fabric] Jan 17 00:00:39.440572 waagent[1952]: 2026-01-17T00:00:39.440533Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 00:00:39.446289 waagent[1952]: 2026-01-17T00:00:39.446250Z INFO Daemon Jan 17 00:00:39.448518 waagent[1952]: 2026-01-17T00:00:39.448478Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:00:39.457510 waagent[1952]: 2026-01-17T00:00:39.457478Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:00:39.528927 waagent[1952]: 2026-01-17T00:00:39.528866Z INFO Daemon Downloaded certificate {'thumbprint': '625A20A31B93060F532103968457E53B2569A52F', 'hasPrivateKey': True} Jan 17 00:00:39.536835 waagent[1952]: 2026-01-17T00:00:39.536795Z INFO Daemon Fetch goal state completed Jan 17 00:00:39.546439 waagent[1952]: 2026-01-17T00:00:39.546405Z INFO Daemon Daemon Starting provisioning Jan 17 00:00:39.550440 waagent[1952]: 2026-01-17T00:00:39.550400Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:00:39.554209 waagent[1952]: 2026-01-17T00:00:39.554175Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-e1db9b2d97] Jan 17 00:00:39.579020 waagent[1952]: 2026-01-17T00:00:39.574258Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-e1db9b2d97] Jan 17 00:00:39.579373 waagent[1952]: 2026-01-17T00:00:39.579330Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:00:39.584255 waagent[1952]: 2026-01-17T00:00:39.584217Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:00:39.624544 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
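Protocol detection above boils down to confirming a route to the WireServer and fetching its version document, the same http://168.63.129.16/?comp=versions URL coreos-metadata hit earlier; the daemon then speaks wire protocol 2012-11-30 even though the fabric prefers 2015-04-05. A sketch of that first fetch; the XML element names are an assumption from public descriptions of the protocol, not something shown in this log.

    # Sketch: the WireServer version probe that starts protocol detection.
    # 168.63.129.16 is Azure's fixed host-fabric address, VM-internal only.
    import urllib.request
    import xml.etree.ElementTree as ET

    with urllib.request.urlopen("http://168.63.129.16/?comp=versions",
                                timeout=5) as resp:
        doc = ET.fromstring(resp.read())

    # Assumed layout: <Versions><Preferred><Version>...</Version></Preferred>
    # <Supported><Version>...</Version>...</Supported></Versions>
    print(doc.findtext("Preferred/Version"))
    print([v.text for v in doc.iter("Version")])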
Jan 17 00:00:39.624550 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:00:39.624573 systemd-networkd[1412]: eth0: DHCP lease lost Jan 17 00:00:39.625616 waagent[1952]: 2026-01-17T00:00:39.625547Z INFO Daemon Daemon Create user account if not exists Jan 17 00:00:39.630275 waagent[1952]: 2026-01-17T00:00:39.630210Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:00:39.633258 systemd-networkd[1412]: eth0: DHCPv6 lease lost Jan 17 00:00:39.635537 waagent[1952]: 2026-01-17T00:00:39.635489Z INFO Daemon Daemon Configure sudoer Jan 17 00:00:39.639381 waagent[1952]: 2026-01-17T00:00:39.639334Z INFO Daemon Daemon Configure sshd Jan 17 00:00:39.642878 waagent[1952]: 2026-01-17T00:00:39.642800Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:00:39.652616 waagent[1952]: 2026-01-17T00:00:39.652572Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:00:39.661215 systemd-networkd[1412]: eth0: DHCPv4 address 10.200.20.34/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:00:40.747901 waagent[1952]: 2026-01-17T00:00:40.744159Z INFO Daemon Daemon Provisioning complete Jan 17 00:00:40.760644 waagent[1952]: 2026-01-17T00:00:40.760602Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:00:40.765773 waagent[1952]: 2026-01-17T00:00:40.765729Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 00:00:40.773625 waagent[1952]: 2026-01-17T00:00:40.773580Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 00:00:40.895959 waagent[2046]: 2026-01-17T00:00:40.895888Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 00:00:40.896270 waagent[2046]: 2026-01-17T00:00:40.896031Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 17 00:00:40.896270 waagent[2046]: 2026-01-17T00:00:40.896083Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 00:00:40.947189 waagent[2046]: 2026-01-17T00:00:40.945143Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 00:00:40.947189 waagent[2046]: 2026-01-17T00:00:40.945389Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:00:40.947189 waagent[2046]: 2026-01-17T00:00:40.945448Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:00:40.954054 waagent[2046]: 2026-01-17T00:00:40.953977Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:00:40.959876 waagent[2046]: 2026-01-17T00:00:40.959839Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 17 00:00:40.960427 waagent[2046]: 2026-01-17T00:00:40.960389Z INFO ExtHandler Jan 17 00:00:40.960566 waagent[2046]: 2026-01-17T00:00:40.960536Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8dc68657-575a-4e08-929d-a0942c479db6 eTag: 7530349989897770043 source: Fabric] Jan 17 00:00:40.960921 waagent[2046]: 2026-01-17T00:00:40.960885Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:00:40.961580 waagent[2046]: 2026-01-17T00:00:40.961537Z INFO ExtHandler Jan 17 00:00:40.961714 waagent[2046]: 2026-01-17T00:00:40.961683Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:00:40.965553 waagent[2046]: 2026-01-17T00:00:40.965524Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:00:41.043831 waagent[2046]: 2026-01-17T00:00:41.043749Z INFO ExtHandler Downloaded certificate {'thumbprint': '625A20A31B93060F532103968457E53B2569A52F', 'hasPrivateKey': True} Jan 17 00:00:41.044339 waagent[2046]: 2026-01-17T00:00:41.044298Z INFO ExtHandler Fetch goal state completed Jan 17 00:00:41.058943 waagent[2046]: 2026-01-17T00:00:41.058895Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2046 Jan 17 00:00:41.059086 waagent[2046]: 2026-01-17T00:00:41.059054Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 00:00:41.060601 waagent[2046]: 2026-01-17T00:00:41.060562Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 00:00:41.060949 waagent[2046]: 2026-01-17T00:00:41.060915Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 00:00:41.086310 waagent[2046]: 2026-01-17T00:00:41.086273Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 00:00:41.086500 waagent[2046]: 2026-01-17T00:00:41.086466Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 00:00:41.092538 waagent[2046]: 2026-01-17T00:00:41.092501Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 00:00:41.098923 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit waagent.service)... Jan 17 00:00:41.098934 systemd[1]: Reloading... Jan 17 00:00:41.176195 zram_generator::config[2111]: No configuration found. Jan 17 00:00:41.257371 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:41.335390 systemd[1]: Reloading finished in 236 ms. Jan 17 00:00:41.357433 waagent[2046]: 2026-01-17T00:00:41.357296Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 00:00:41.363588 systemd[1]: Reloading requested from client PID 2152 ('systemctl') (unit waagent.service)... Jan 17 00:00:41.363601 systemd[1]: Reloading... Jan 17 00:00:41.433238 zram_generator::config[2187]: No configuration found. Jan 17 00:00:41.532245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:41.605413 systemd[1]: Reloading finished in 241 ms. Jan 17 00:00:41.625782 waagent[2046]: 2026-01-17T00:00:41.625695Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 00:00:41.625873 waagent[2046]: 2026-01-17T00:00:41.625843Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 00:00:42.336308 waagent[2046]: 2026-01-17T00:00:42.336226Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 17 00:00:42.339826 waagent[2046]: 2026-01-17T00:00:42.339779Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 00:00:42.340607 waagent[2046]: 2026-01-17T00:00:42.340559Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 00:00:42.341003 waagent[2046]: 2026-01-17T00:00:42.340891Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 00:00:42.341328 waagent[2046]: 2026-01-17T00:00:42.341265Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:00:42.341524 waagent[2046]: 2026-01-17T00:00:42.341442Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:00:42.342611 waagent[2046]: 2026-01-17T00:00:42.341874Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:00:42.342611 waagent[2046]: 2026-01-17T00:00:42.341952Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:00:42.342611 waagent[2046]: 2026-01-17T00:00:42.342144Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 00:00:42.342611 waagent[2046]: 2026-01-17T00:00:42.342337Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:00:42.342611 waagent[2046]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:00:42.342611 waagent[2046]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:00:42.342611 waagent[2046]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:00:42.342611 waagent[2046]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:00:42.342611 waagent[2046]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:00:42.342611 waagent[2046]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:00:42.342960 waagent[2046]: 2026-01-17T00:00:42.342919Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:00:42.343144 waagent[2046]: 2026-01-17T00:00:42.343097Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:00:42.343218 waagent[2046]: 2026-01-17T00:00:42.343158Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
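The routing table MonitorHandler dumps is raw /proc/net/route, whose Destination, Gateway and Mask columns are little-endian hex IPv4 words: 0114C80A is 10.200.20.1 (the default gateway), 10813FA8 is the WireServer 168.63.129.16, and FEA9FEA9 is IMDS at 169.254.169.254. A decoding sketch:

    # Sketch: decoding the little-endian hex words from /proc/net/route
    # as logged above.
    import socket
    import struct

    def hex_le_ip(word: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

    for dest, gw in [("00000000", "0114C80A"),   # default via 10.200.20.1
                     ("10813FA8", "0114C80A"),   # WireServer host route
                     ("FEA9FEA9", "0114C80A")]:  # IMDS host route
        print(hex_le_ip(dest), "via", hex_le_ip(gw))

The interface listing in the next entries shows a related Azure detail: eth0 and enP59133s1 share MAC 7c:ed:8d:d4:70:67 because accelerated networking enslaves the SR-IOV virtual function to the synthetic hv_netvsc NIC.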
Jan 17 00:00:42.343506 waagent[2046]: 2026-01-17T00:00:42.343463Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:00:42.344264 waagent[2046]: 2026-01-17T00:00:42.344227Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:00:42.346133 waagent[2046]: 2026-01-17T00:00:42.346091Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:00:42.347385 waagent[2046]: 2026-01-17T00:00:42.347338Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:00:42.347459 waagent[2046]: 2026-01-17T00:00:42.347431Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:00:42.348769 waagent[2046]: 2026-01-17T00:00:42.348729Z INFO ExtHandler ExtHandler Jan 17 00:00:42.348842 waagent[2046]: 2026-01-17T00:00:42.348812Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ceb94d0c-465b-4362-b265-32669146dbae correlation 17fd6630-1b3b-4cee-af8d-0cc31beba16f created: 2026-01-16T23:59:35.126282Z] Jan 17 00:00:42.349555 waagent[2046]: 2026-01-17T00:00:42.349487Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:00:42.350927 waagent[2046]: 2026-01-17T00:00:42.350848Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 17 00:00:42.382195 waagent[2046]: 2026-01-17T00:00:42.382054Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: B3141A6E-30C0-44C6-A6B2-0024857176D6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:00:42.382336 waagent[2046]: 2026-01-17T00:00:42.382275Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:00:42.382336 waagent[2046]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:00:42.382336 waagent[2046]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:00:42.382336 waagent[2046]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d4:70:67 brd ff:ff:ff:ff:ff:ff Jan 17 00:00:42.382336 waagent[2046]: 3: enP59133s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:d4:70:67 brd ff:ff:ff:ff:ff:ff\ altname enP59133p0s2 Jan 17 00:00:42.382336 waagent[2046]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:00:42.382336 waagent[2046]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:00:42.382336 waagent[2046]: 2: eth0 inet 10.200.20.34/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:00:42.382336 waagent[2046]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:00:42.382336 waagent[2046]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:00:42.382336 waagent[2046]: 2: eth0 inet6 fe80::7eed:8dff:fed4:7067/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:00:42.458812 waagent[2046]: 2026-01-17T00:00:42.458745Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 00:00:42.458812 waagent[2046]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:00:42.458812 waagent[2046]: pkts bytes target prot opt in out source destination Jan 17 00:00:42.458812 waagent[2046]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:00:42.458812 waagent[2046]: pkts bytes target prot opt in out source destination Jan 17 00:00:42.458812 waagent[2046]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:00:42.458812 waagent[2046]: pkts bytes target prot opt in out source destination Jan 17 00:00:42.458812 waagent[2046]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:00:42.458812 waagent[2046]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:00:42.458812 waagent[2046]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:00:42.461646 waagent[2046]: 2026-01-17T00:00:42.461593Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:00:42.461646 waagent[2046]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:00:42.461646 waagent[2046]: pkts bytes target prot opt in out source destination Jan 17 00:00:42.461646 waagent[2046]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:00:42.461646 waagent[2046]: pkts bytes target prot opt in out source destination Jan 17 00:00:42.461646 waagent[2046]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:00:42.461646 waagent[2046]: pkts bytes target prot opt in out source destination Jan 17 00:00:42.461646 waagent[2046]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:00:42.461646 waagent[2046]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:00:42.461646 waagent[2046]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:00:42.461862 waagent[2046]: 2026-01-17T00:00:42.461825Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:00:48.025985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:00:48.034394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:48.137351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:48.141191 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:00:48.277298 kubelet[2288]: E0117 00:00:48.277199 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:00:48.282337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:00:48.282494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:00:51.275131 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:00:51.284370 systemd[1]: Started sshd@0-10.200.20.34:22-10.200.16.10:40464.service - OpenSSH per-connection server daemon (10.200.16.10:40464). 
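The three OUTPUT rules waagent installed gate access to the WireServer: DNS over tcp/53 is allowed for everyone, root-owned traffic (the agent itself runs as UID 0) passes, and any other new connection to 168.63.129.16 is dropped, keeping unprivileged workloads away from the goal-state endpoint. A sketch of equivalent iptables invocations; the agent's real calls may differ in table choice or extra flags.

    # Sketch: iptables commands equivalent to the OUTPUT rules in the
    # listing above (waagent's own invocation may differ in details).
    import subprocess

    WIRESERVER = "168.63.129.16"
    RULES = [
        # Allow DNS to the WireServer from any process.
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # Allow root (UID 0, i.e. the agent itself) through.
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # Drop fresh connections from everything else.
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables", "-A", "OUTPUT", "-d", WIRESERVER, *rule],
                       check=True)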
Jan 17 00:00:51.852542 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 40464 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:00:51.853760 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:51.858093 systemd-logind[1817]: New session 3 of user core. Jan 17 00:00:51.867459 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:00:52.278370 systemd[1]: Started sshd@1-10.200.20.34:22-10.200.16.10:40472.service - OpenSSH per-connection server daemon (10.200.16.10:40472). Jan 17 00:00:52.759423 sshd[2300]: Accepted publickey for core from 10.200.16.10 port 40472 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:00:52.760682 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:52.764604 systemd-logind[1817]: New session 4 of user core. Jan 17 00:00:52.770503 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:00:53.104372 sshd[2300]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:53.106987 systemd-logind[1817]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:00:53.107227 systemd[1]: sshd@1-10.200.20.34:22-10.200.16.10:40472.service: Deactivated successfully. Jan 17 00:00:53.109779 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:00:53.111166 systemd-logind[1817]: Removed session 4. Jan 17 00:00:53.189570 systemd[1]: Started sshd@2-10.200.20.34:22-10.200.16.10:40484.service - OpenSSH per-connection server daemon (10.200.16.10:40484). Jan 17 00:00:53.632143 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 40484 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:00:53.633401 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:53.636963 systemd-logind[1817]: New session 5 of user core. Jan 17 00:00:53.646384 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:00:53.961346 sshd[2308]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:53.963908 systemd-logind[1817]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:00:53.964132 systemd[1]: sshd@2-10.200.20.34:22-10.200.16.10:40484.service: Deactivated successfully. Jan 17 00:00:53.966788 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:00:53.967924 systemd-logind[1817]: Removed session 5. Jan 17 00:00:54.041368 systemd[1]: Started sshd@3-10.200.20.34:22-10.200.16.10:40496.service - OpenSSH per-connection server daemon (10.200.16.10:40496). Jan 17 00:00:54.484563 sshd[2316]: Accepted publickey for core from 10.200.16.10 port 40496 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:00:54.485871 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:54.489547 systemd-logind[1817]: New session 6 of user core. Jan 17 00:00:54.496450 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:00:54.816933 sshd[2316]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:54.819456 systemd-logind[1817]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:00:54.823473 systemd[1]: sshd@3-10.200.20.34:22-10.200.16.10:40496.service: Deactivated successfully. Jan 17 00:00:54.825584 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:00:54.826852 systemd-logind[1817]: Removed session 6. 
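Each "Accepted publickey ... ssh2: RSA SHA256:h9EzKM..." line carries the OpenSSH-style key fingerprint: the unpadded base64 of the SHA-256 digest over the wire-format public-key blob. A sketch with dummy key bytes (not a usable key):

    # Sketch: the fingerprint format in the "Accepted publickey" lines is
    # "SHA256:" + unpadded base64 of SHA-256 over the raw key blob.
    import base64
    import hashlib

    def openssh_fingerprint(authorized_key_line: str) -> str:
        blob = base64.b64decode(authorized_key_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Dummy base64 payload for illustration; real input is a line from
    # ~/.ssh/authorized_keys.
    print(openssh_fingerprint("ssh-ed25519 AAAAplaceholderblob0 core@placeholder"))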
Jan 17 00:00:54.902365 systemd[1]: Started sshd@4-10.200.20.34:22-10.200.16.10:40502.service - OpenSSH per-connection server daemon (10.200.16.10:40502). Jan 17 00:00:55.383803 sshd[2324]: Accepted publickey for core from 10.200.16.10 port 40502 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:00:55.385011 sshd[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:55.389780 systemd-logind[1817]: New session 7 of user core. Jan 17 00:00:55.398480 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:00:55.816055 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:00:55.816339 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:55.831197 sudo[2328]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:55.909189 sshd[2324]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:55.912756 systemd[1]: sshd@4-10.200.20.34:22-10.200.16.10:40502.service: Deactivated successfully. Jan 17 00:00:55.915323 systemd-logind[1817]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:00:55.915335 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:00:55.916886 systemd-logind[1817]: Removed session 7. Jan 17 00:00:56.000632 systemd[1]: Started sshd@5-10.200.20.34:22-10.200.16.10:40516.service - OpenSSH per-connection server daemon (10.200.16.10:40516). Jan 17 00:00:56.481728 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 40516 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:00:56.483004 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:00:56.486789 systemd-logind[1817]: New session 8 of user core. Jan 17 00:00:56.496480 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:00:56.755616 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:00:56.755876 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:56.758972 sudo[2338]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:56.763074 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:00:56.763334 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:00:56.774632 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:00:56.775706 auditctl[2341]: No rules Jan 17 00:00:56.776091 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:00:56.776326 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:00:56.779513 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:00:56.800184 augenrules[2360]: No rules Jan 17 00:00:56.802507 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:00:56.803768 sudo[2337]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:56.882375 sshd[2333]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:56.885052 systemd[1]: sshd@5-10.200.20.34:22-10.200.16.10:40516.service: Deactivated successfully. Jan 17 00:00:56.885167 systemd-logind[1817]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:00:56.889161 systemd[1]: session-8.scope: Deactivated successfully. 
Jan 17 00:00:56.889941 systemd-logind[1817]: Removed session 8.
Jan 17 00:00:56.964604 systemd[1]: Started sshd@6-10.200.20.34:22-10.200.16.10:40530.service - OpenSSH per-connection server daemon (10.200.16.10:40530).
Jan 17 00:00:57.446834 sshd[2369]: Accepted publickey for core from 10.200.16.10 port 40530 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:00:57.448015 sshd[2369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:57.451943 systemd-logind[1817]: New session 9 of user core.
Jan 17 00:00:57.458455 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 00:00:57.720536 sudo[2373]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 00:00:57.720792 sudo[2373]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 00:00:58.525948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 17 00:00:58.535575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:00:58.887358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:00:58.887742 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:00:58.923943 kubelet[2399]: E0117 00:00:58.923878 2399 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:00:58.926092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:00:58.926320 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:00:59.234523 (dockerd)[2408]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 00:00:59.234922 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 00:00:59.866261 dockerd[2408]: time="2026-01-17T00:00:59.866209040Z" level=info msg="Starting up"
Jan 17 00:01:00.098613 chronyd[1795]: Selected source PHC0
Jan 17 00:01:00.268166 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport482546753-merged.mount: Deactivated successfully.
Jan 17 00:01:00.531692 systemd[1]: var-lib-docker-metacopy\x2dcheck2409669598-merged.mount: Deactivated successfully.
Jan 17 00:01:00.548395 dockerd[2408]: time="2026-01-17T00:01:00.548342222Z" level=info msg="Loading containers: start."
Jan 17 00:01:00.705194 kernel: Initializing XFRM netlink socket
Jan 17 00:01:00.867307 systemd-networkd[1412]: docker0: Link UP
Jan 17 00:01:00.905296 dockerd[2408]: time="2026-01-17T00:01:00.905254443Z" level=info msg="Loading containers: done."
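The kubelet above exits immediately (status=1/FAILURE) because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is generated by kubeadm init/join, so these early failures are expected until the install script reaches that step. As a rough sketch of what eventually lands there (the real file is generated by kubeadm and its exact contents are not shown in this log):

    # /var/lib/kubelet/config.yaml -- minimal sketch, assuming kubeadm-style defaults
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs                      # matches "CgroupDriver":"cgroupfs" seen later in this log
    staticPodPath: /etc/kubernetes/manifests    # matches "Adding static pod path" seen later in this log
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt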
Jan 17 00:01:00.932863 dockerd[2408]: time="2026-01-17T00:01:00.932481368Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 00:01:00.932863 dockerd[2408]: time="2026-01-17T00:01:00.932586885Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 00:01:00.932863 dockerd[2408]: time="2026-01-17T00:01:00.932691682Z" level=info msg="Daemon has completed initialization"
Jan 17 00:01:00.995465 dockerd[2408]: time="2026-01-17T00:01:00.995380595Z" level=info msg="API listen on /run/docker.sock"
Jan 17 00:01:00.995773 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 00:01:01.726745 containerd[1868]: time="2026-01-17T00:01:01.726707876Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\""
Jan 17 00:01:02.619529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255268707.mount: Deactivated successfully.
Jan 17 00:01:03.943210 containerd[1868]: time="2026-01-17T00:01:03.942456854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:03.951816 containerd[1868]: time="2026-01-17T00:01:03.951774451Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 17 00:01:03.957111 containerd[1868]: time="2026-01-17T00:01:03.957083289Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:03.964137 containerd[1868]: time="2026-01-17T00:01:03.964094967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:03.965409 containerd[1868]: time="2026-01-17T00:01:03.965143527Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.238398131s"
Jan 17 00:01:03.965409 containerd[1868]: time="2026-01-17T00:01:03.965190727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 17 00:01:03.965954 containerd[1868]: time="2026-01-17T00:01:03.965929247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 17 00:01:05.155216 containerd[1868]: time="2026-01-17T00:01:05.154411387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:05.157762 containerd[1868]: time="2026-01-17T00:01:05.157557746Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 17 00:01:05.161611 containerd[1868]: time="2026-01-17T00:01:05.161537185Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:05.166737 containerd[1868]: time="2026-01-17T00:01:05.166688263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:05.168294 containerd[1868]: time="2026-01-17T00:01:05.168261543Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.202295736s"
Jan 17 00:01:05.169196 containerd[1868]: time="2026-01-17T00:01:05.168395983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 17 00:01:05.171782 containerd[1868]: time="2026-01-17T00:01:05.171753102Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 17 00:01:06.196206 containerd[1868]: time="2026-01-17T00:01:06.195829495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:06.199828 containerd[1868]: time="2026-01-17T00:01:06.199800373Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747"
Jan 17 00:01:06.203433 containerd[1868]: time="2026-01-17T00:01:06.203398732Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:06.209686 containerd[1868]: time="2026-01-17T00:01:06.209632610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:06.210676 containerd[1868]: time="2026-01-17T00:01:06.210645770Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.038859588s"
Jan 17 00:01:06.210778 containerd[1868]: time="2026-01-17T00:01:06.210763610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 17 00:01:06.211366 containerd[1868]: time="2026-01-17T00:01:06.211345130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 17 00:01:07.293584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763415022.mount: Deactivated successfully.
Jan 17 00:01:07.647632 containerd[1868]: time="2026-01-17T00:01:07.647503231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:07.652636 containerd[1868]: time="2026-01-17T00:01:07.652462150Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724"
Jan 17 00:01:07.656238 containerd[1868]: time="2026-01-17T00:01:07.656192948Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:07.660457 containerd[1868]: time="2026-01-17T00:01:07.660416987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:07.661089 containerd[1868]: time="2026-01-17T00:01:07.660955587Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.449422777s"
Jan 17 00:01:07.661089 containerd[1868]: time="2026-01-17T00:01:07.660992787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\""
Jan 17 00:01:07.661552 containerd[1868]: time="2026-01-17T00:01:07.661522827Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jan 17 00:01:08.410238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177996066.mount: Deactivated successfully.
Jan 17 00:01:09.025986 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 17 00:01:09.035389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:09.299803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:09.302667 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:09.335032 kubelet[2650]: E0117 00:01:09.334971 2650 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:09.338301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:09.338460 systemd[1]: kubelet.service: Failed with result 'exit-code'.
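The "Scheduled restart job, restart counter is at 3" lines show systemd re-launching the still-unconfigured kubelet on a timer, rather than anything retrying inside the kubelet itself. A unit stanza of roughly this shape produces that behavior (a sketch of the usual kubeadm-style unit settings, not necessarily Flatcar's exact unit file):

    [Service]
    Restart=always
    RestartSec=10s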
Jan 17 00:01:10.369005 containerd[1868]: time="2026-01-17T00:01:10.367963098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:10.371188 containerd[1868]: time="2026-01-17T00:01:10.371153819Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Jan 17 00:01:10.374493 containerd[1868]: time="2026-01-17T00:01:10.374470500Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:10.380253 containerd[1868]: time="2026-01-17T00:01:10.380212302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:10.381484 containerd[1868]: time="2026-01-17T00:01:10.381454862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.719899155s"
Jan 17 00:01:10.381578 containerd[1868]: time="2026-01-17T00:01:10.381563742Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jan 17 00:01:10.382717 containerd[1868]: time="2026-01-17T00:01:10.382699463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 17 00:01:11.002558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916112454.mount: Deactivated successfully.
Jan 17 00:01:11.035753 containerd[1868]: time="2026-01-17T00:01:11.035712955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:11.039728 containerd[1868]: time="2026-01-17T00:01:11.039704156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Jan 17 00:01:11.043223 containerd[1868]: time="2026-01-17T00:01:11.043201797Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:11.049184 containerd[1868]: time="2026-01-17T00:01:11.049139639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:11.049958 containerd[1868]: time="2026-01-17T00:01:11.049935119Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 667.143176ms"
Jan 17 00:01:11.050054 containerd[1868]: time="2026-01-17T00:01:11.050038799Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 17 00:01:11.050605 containerd[1868]: time="2026-01-17T00:01:11.050557239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jan 17 00:01:11.748883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248802800.mount: Deactivated successfully.
Jan 17 00:01:13.558498 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jan 17 00:01:14.998860 containerd[1868]: time="2026-01-17T00:01:14.998813032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:15.002499 containerd[1868]: time="2026-01-17T00:01:15.002214871Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Jan 17 00:01:15.007794 containerd[1868]: time="2026-01-17T00:01:15.006271830Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:15.012708 containerd[1868]: time="2026-01-17T00:01:15.012678948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:15.013792 containerd[1868]: time="2026-01-17T00:01:15.013759708Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.963012989s"
Jan 17 00:01:15.013792 containerd[1868]: time="2026-01-17T00:01:15.013790948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jan 17 00:01:19.525909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 17 00:01:19.533507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:19.874308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:19.884522 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:01:19.926180 kubelet[2780]: E0117 00:01:19.925532 2780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 00:01:19.929924 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 00:01:19.930109 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 00:01:21.260473 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:21.267473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:21.297523 systemd[1]: Reloading requested from client PID 2797 ('systemctl') (unit session-9.scope)...
Jan 17 00:01:21.297540 systemd[1]: Reloading...
Jan 17 00:01:21.399314 zram_generator::config[2838]: No configuration found.
Jan 17 00:01:21.499154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:01:21.574973 systemd[1]: Reloading finished in 277 ms.
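The reload above also surfaces a unit-file warning: line 6 of docker.socket still listens on the legacy /var/run/docker.sock path, and systemd rewrites it to /run/docker.sock on the fly. The permanent fix it asks for is a one-line change, shown here as a sketch via a drop-in (the drop-in path below is illustrative, not taken from this log):

    # /etc/systemd/system/docker.socket.d/10-runpath.conf -- hypothetical drop-in
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

Clearing the list with an empty ListenStream= before re-adding is the standard systemd pattern for overriding list-valued settings; a systemctl daemon-reload afterwards makes the warning go away.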
Jan 17 00:01:21.622264 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 00:01:21.622324 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 00:01:21.622792 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:21.630002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:01:21.806316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:01:21.813515 (kubelet)[2916]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:01:21.843256 update_engine[1824]: I20260117 00:01:21.843106 1824 update_attempter.cc:509] Updating boot flags...
Jan 17 00:01:21.848471 kubelet[2916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:01:21.848471 kubelet[2916]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:01:21.848471 kubelet[2916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:01:21.848752 kubelet[2916]: I0117 00:01:21.848531 2916 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:01:22.380521 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2933)
Jan 17 00:01:22.492319 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2937)
Jan 17 00:01:22.620976 kubelet[2916]: I0117 00:01:22.620939 2916 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 17 00:01:22.620976 kubelet[2916]: I0117 00:01:22.620969 2916 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:01:22.621254 kubelet[2916]: I0117 00:01:22.621238 2916 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 17 00:01:22.641931 kubelet[2916]: E0117 00:01:22.641567 2916 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:22.645549 kubelet[2916]: I0117 00:01:22.645392 2916 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:01:22.649599 kubelet[2916]: E0117 00:01:22.649578 2916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:01:22.649693 kubelet[2916]: I0117 00:01:22.649683 2916 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
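The three deprecation warnings ask for command-line flags to be moved into the config file. In kubelet.config.k8s.io/v1beta1 the first and third map to fields roughly as sketched below; --pod-infra-container-image has no config-file equivalent, and per the warning the sandbox image is instead reported by the CRI runtime. The volume plugin path is taken from the Flexvolume probe line later in this log; the containerd socket path is the conventional default and is an assumption, not shown in this log:

    # KubeletConfiguration fragment -- sketch of the flag-to-field mapping
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # --container-runtime-endpoint (assumed default path)
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # --volume-plugin-dir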
Jan 17 00:01:22.652977 kubelet[2916]: I0117 00:01:22.652959 2916 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:01:22.654568 kubelet[2916]: I0117 00:01:22.654538 2916 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:01:22.655097 kubelet[2916]: I0117 00:01:22.654642 2916 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-e1db9b2d97","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Jan 17 00:01:22.655097 kubelet[2916]: I0117 00:01:22.654834 2916 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:01:22.655097 kubelet[2916]: I0117 00:01:22.654843 2916 container_manager_linux.go:304] "Creating device plugin manager"
Jan 17 00:01:22.655097 kubelet[2916]: I0117 00:01:22.654961 2916 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:01:22.657599 kubelet[2916]: I0117 00:01:22.657585 2916 kubelet.go:446] "Attempting to sync node with API server"
Jan 17 00:01:22.657679 kubelet[2916]: I0117 00:01:22.657671 2916 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:01:22.657745 kubelet[2916]: I0117 00:01:22.657736 2916 kubelet.go:352] "Adding apiserver pod source"
Jan 17 00:01:22.657796 kubelet[2916]: I0117 00:01:22.657788 2916 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:01:22.663022 kubelet[2916]: W0117 00:01:22.662980 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:22.663094 kubelet[2916]: E0117 00:01:22.663033 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:22.663513 kubelet[2916]: W0117 00:01:22.663475 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e1db9b2d97&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:22.663574 kubelet[2916]: E0117 00:01:22.663533 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e1db9b2d97&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:22.663680 kubelet[2916]: I0117 00:01:22.663664 2916 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:01:22.664255 kubelet[2916]: I0117 00:01:22.664241 2916 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 00:01:22.664301 kubelet[2916]: W0117 00:01:22.664297 2916 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 00:01:22.667737 kubelet[2916]: I0117 00:01:22.667557 2916 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 17 00:01:22.667737 kubelet[2916]: I0117 00:01:22.667593 2916 server.go:1287] "Started kubelet"
Jan 17 00:01:22.668537 kubelet[2916]: I0117 00:01:22.668511 2916 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:01:22.670012 kubelet[2916]: I0117 00:01:22.669992 2916 server.go:479] "Adding debug handlers to kubelet server"
Jan 17 00:01:22.673167 kubelet[2916]: I0117 00:01:22.673107 2916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:01:22.674163 kubelet[2916]: I0117 00:01:22.673490 2916 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:01:22.674163 kubelet[2916]: E0117 00:01:22.673696 2916 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-e1db9b2d97.188b5bae42490c6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-e1db9b2d97,UID:ci-4081.3.6-n-e1db9b2d97,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-e1db9b2d97,},FirstTimestamp:2026-01-17 00:01:22.667572334 +0000 UTC m=+0.851082318,LastTimestamp:2026-01-17 00:01:22.667572334 +0000 UTC m=+0.851082318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-e1db9b2d97,}"
Jan 17 00:01:22.674795 kubelet[2916]: I0117 00:01:22.674769 2916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:01:22.676342 kubelet[2916]: I0117 00:01:22.676300 2916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:01:22.678027 kubelet[2916]: E0117 00:01:22.677815 2916 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:01:22.678473 kubelet[2916]: E0117 00:01:22.678454 2916 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found"
Jan 17 00:01:22.678595 kubelet[2916]: I0117 00:01:22.678587 2916 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 17 00:01:22.678874 kubelet[2916]: I0117 00:01:22.678864 2916 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 17 00:01:22.679066 kubelet[2916]: I0117 00:01:22.679056 2916 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 00:01:22.679437 kubelet[2916]: W0117 00:01:22.679397 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:22.680228 kubelet[2916]: E0117 00:01:22.679527 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:22.680228 kubelet[2916]: I0117 00:01:22.679676 2916 factory.go:221] Registration of the systemd container factory successfully
Jan 17 00:01:22.680228 kubelet[2916]: I0117 00:01:22.679744 2916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:01:22.680666 kubelet[2916]: E0117 00:01:22.680471 2916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e1db9b2d97?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="200ms"
Jan 17 00:01:22.681216 kubelet[2916]: I0117 00:01:22.680915 2916 factory.go:221] Registration of the containerd container factory successfully
Jan 17 00:01:22.711621 kubelet[2916]: I0117 00:01:22.711588 2916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:01:22.712926 kubelet[2916]: I0117 00:01:22.712903 2916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:01:22.713019 kubelet[2916]: I0117 00:01:22.713010 2916 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 17 00:01:22.713079 kubelet[2916]: I0117 00:01:22.713071 2916 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:01:22.713153 kubelet[2916]: I0117 00:01:22.713146 2916 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 17 00:01:22.713267 kubelet[2916]: E0117 00:01:22.713252 2916 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:01:22.714909 kubelet[2916]: W0117 00:01:22.714886 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:22.715019 kubelet[2916]: E0117 00:01:22.715003 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:22.756879 kubelet[2916]: I0117 00:01:22.756852 2916 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:01:22.756879 kubelet[2916]: I0117 00:01:22.756871 2916 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:01:22.757005 kubelet[2916]: I0117 00:01:22.756890 2916 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:01:22.762274 kubelet[2916]: I0117 00:01:22.762253 2916 policy_none.go:49] "None policy: Start"
Jan 17 00:01:22.762274 kubelet[2916]: I0117 00:01:22.762276 2916 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 17 00:01:22.762356 kubelet[2916]: I0117 00:01:22.762287 2916 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 00:01:22.772588 kubelet[2916]: I0117 00:01:22.771488 2916 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 00:01:22.772588 kubelet[2916]: I0117 00:01:22.771664 2916 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:01:22.772588 kubelet[2916]: I0117 00:01:22.771674 2916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:01:22.772588 kubelet[2916]: I0117 00:01:22.772423 2916 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:01:22.773524 kubelet[2916]: E0117 00:01:22.773499 2916 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:01:22.773579 kubelet[2916]: E0117 00:01:22.773542 2916 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-e1db9b2d97\" not found"
Jan 17 00:01:22.818557 kubelet[2916]: E0117 00:01:22.818489 2916 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.823206 kubelet[2916]: E0117 00:01:22.822881 2916 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.824086 kubelet[2916]: E0117 00:01:22.824058 2916 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.873064 kubelet[2916]: I0117 00:01:22.873045 2916 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.873457 kubelet[2916]: E0117 00:01:22.873416 2916 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880778 kubelet[2916]: I0117 00:01:22.880752 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880852 kubelet[2916]: I0117 00:01:22.880781 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f00be6668289ccf1dd12e434e723c88-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" (UID: \"5f00be6668289ccf1dd12e434e723c88\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880852 kubelet[2916]: I0117 00:01:22.880803 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f00be6668289ccf1dd12e434e723c88-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" (UID: \"5f00be6668289ccf1dd12e434e723c88\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880852 kubelet[2916]: I0117 00:01:22.880818 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f00be6668289ccf1dd12e434e723c88-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" (UID: \"5f00be6668289ccf1dd12e434e723c88\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880852 kubelet[2916]: I0117 00:01:22.880839 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880980 kubelet[2916]: I0117 00:01:22.880856 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880980 kubelet[2916]: I0117 00:01:22.880871 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880980 kubelet[2916]: I0117 00:01:22.880884 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.880980 kubelet[2916]: I0117 00:01:22.880899 2916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8c4515d94dec88581d5707b3228aeb2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-e1db9b2d97\" (UID: \"f8c4515d94dec88581d5707b3228aeb2\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:22.881215 kubelet[2916]: E0117 00:01:22.881193 2916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e1db9b2d97?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="400ms"
Jan 17 00:01:23.075550 kubelet[2916]: I0117 00:01:23.075522 2916 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:23.075880 kubelet[2916]: E0117 00:01:23.075858 2916 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:23.120132 containerd[1868]: time="2026-01-17T00:01:23.119831170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-e1db9b2d97,Uid:5f00be6668289ccf1dd12e434e723c88,Namespace:kube-system,Attempt:0,}"
Jan 17 00:01:23.123737 containerd[1868]: time="2026-01-17T00:01:23.123706010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-e1db9b2d97,Uid:81cb62e1b9e18a45f52c7c7274fa77ff,Namespace:kube-system,Attempt:0,}"
Jan 17 00:01:23.125498 containerd[1868]: time="2026-01-17T00:01:23.125380649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-e1db9b2d97,Uid:f8c4515d94dec88581d5707b3228aeb2,Namespace:kube-system,Attempt:0,}"
Jan 17 00:01:23.281994 kubelet[2916]: E0117 00:01:23.281950 2916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e1db9b2d97?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="800ms"
Jan 17 00:01:23.478252 kubelet[2916]: I0117 00:01:23.478094 2916 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:23.478451 kubelet[2916]: E0117 00:01:23.478423 2916 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:23.569591 kubelet[2916]: W0117 00:01:23.569538 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:23.569709 kubelet[2916]: E0117 00:01:23.569600 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:23.819696 kubelet[2916]: W0117 00:01:23.819638 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:23.819824 kubelet[2916]: E0117 00:01:23.819704 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:23.908692 kubelet[2916]: E0117 00:01:23.908585 2916 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-e1db9b2d97.188b5bae42490c6e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-e1db9b2d97,UID:ci-4081.3.6-n-e1db9b2d97,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-e1db9b2d97,},FirstTimestamp:2026-01-17 00:01:22.667572334 +0000 UTC m=+0.851082318,LastTimestamp:2026-01-17 00:01:22.667572334 +0000 UTC m=+0.851082318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-e1db9b2d97,}"
Jan 17 00:01:23.925511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1939866287.mount: Deactivated successfully.
Jan 17 00:01:23.947932 containerd[1868]: time="2026-01-17T00:01:23.947886369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:01:23.959457 containerd[1868]: time="2026-01-17T00:01:23.959428848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 17 00:01:23.962880 containerd[1868]: time="2026-01-17T00:01:23.962847888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:01:23.966242 containerd[1868]: time="2026-01-17T00:01:23.966219527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:01:23.968082 kubelet[2916]: W0117 00:01:23.967994 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e1db9b2d97&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:23.968082 kubelet[2916]: E0117 00:01:23.968059 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e1db9b2d97&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:23.970063 containerd[1868]: time="2026-01-17T00:01:23.970029887Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:01:23.972949 containerd[1868]: time="2026-01-17T00:01:23.972885647Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:01:23.977433 containerd[1868]: time="2026-01-17T00:01:23.977400846Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:01:23.980304 containerd[1868]: time="2026-01-17T00:01:23.980194606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:01:23.981344 containerd[1868]: time="2026-01-17T00:01:23.981090526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 861.186756ms"
Jan 17 00:01:23.986190 containerd[1868]: time="2026-01-17T00:01:23.986138246Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 860.705557ms"
Jan 17 00:01:24.002479 containerd[1868]: time="2026-01-17T00:01:24.002440764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 878.676914ms"
Jan 17 00:01:24.082437 kubelet[2916]: E0117 00:01:24.082328 2916 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e1db9b2d97?timeout=10s\": dial tcp 10.200.20.34:6443: connect: connection refused" interval="1.6s"
Jan 17 00:01:24.156054 kubelet[2916]: W0117 00:01:24.155961 2916 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.34:6443: connect: connection refused
Jan 17 00:01:24.156054 kubelet[2916]: E0117 00:01:24.156031 2916 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:01:24.280512 kubelet[2916]: I0117 00:01:24.280480 2916 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:24.280865 kubelet[2916]: E0117 00:01:24.280843 2916 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.34:6443/api/v1/nodes\": dial tcp 10.200.20.34:6443: connect: connection refused" node="ci-4081.3.6-n-e1db9b2d97"
Jan 17 00:01:24.521064 containerd[1868]: time="2026-01-17T00:01:24.520831564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:24.521064 containerd[1868]: time="2026-01-17T00:01:24.520912484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:24.521678 containerd[1868]: time="2026-01-17T00:01:24.521481324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:24.521678 containerd[1868]: time="2026-01-17T00:01:24.521596564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:24.521678 containerd[1868]: time="2026-01-17T00:01:24.521657204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:24.521750 containerd[1868]: time="2026-01-17T00:01:24.521689204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:24.522971 containerd[1868]: time="2026-01-17T00:01:24.522869325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:24.523210 containerd[1868]: time="2026-01-17T00:01:24.522947165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:24.523692 containerd[1868]: time="2026-01-17T00:01:24.523506805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:24.523692 containerd[1868]: time="2026-01-17T00:01:24.523644045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:24.524354 containerd[1868]: time="2026-01-17T00:01:24.524273885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:24.527523 containerd[1868]: time="2026-01-17T00:01:24.524810845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:24.587730 containerd[1868]: time="2026-01-17T00:01:24.587692260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-e1db9b2d97,Uid:81cb62e1b9e18a45f52c7c7274fa77ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"7387b8e8036e216d7cba6737c554e7cdc27e363987fc14d7c2c7da9c985cd59a\""
Jan 17 00:01:24.592466 containerd[1868]: time="2026-01-17T00:01:24.592432061Z" level=info msg="CreateContainer within sandbox \"7387b8e8036e216d7cba6737c554e7cdc27e363987fc14d7c2c7da9c985cd59a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 00:01:24.594993 containerd[1868]: time="2026-01-17T00:01:24.594959862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-e1db9b2d97,Uid:f8c4515d94dec88581d5707b3228aeb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6acd08504984cff73a64b5531edace7474bd1c70f55c37466560412560d08f5a\""
Jan 17 00:01:24.596931 containerd[1868]: time="2026-01-17T00:01:24.596904862Z" level=info msg="CreateContainer within sandbox \"6acd08504984cff73a64b5531edace7474bd1c70f55c37466560412560d08f5a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 00:01:24.600594 containerd[1868]: time="2026-01-17T00:01:24.600570623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-e1db9b2d97,Uid:5f00be6668289ccf1dd12e434e723c88,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb09bfc884f16b4c0b7058686ab127085d4986dbdb6af94e0b7a656dbcf6b92b\""
Jan 17 00:01:24.602677 containerd[1868]: time="2026-01-17T00:01:24.602657584Z" level=info msg="CreateContainer within sandbox \"fb09bfc884f16b4c0b7058686ab127085d4986dbdb6af94e0b7a656dbcf6b92b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 00:01:24.677022 containerd[1868]: time="2026-01-17T00:01:24.676976281Z" level=info msg="CreateContainer within sandbox \"6acd08504984cff73a64b5531edace7474bd1c70f55c37466560412560d08f5a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"72474d5e4a8636b4afcc698b82e6d30454938411a327e04cf7f4c643644170dc\""
Jan 17 00:01:24.677828 containerd[1868]: time="2026-01-17T00:01:24.677807921Z" level=info msg="StartContainer for \"72474d5e4a8636b4afcc698b82e6d30454938411a327e04cf7f4c643644170dc\""
Jan 17 00:01:24.693618 containerd[1868]: time="2026-01-17T00:01:24.693560965Z" level=info msg="CreateContainer within sandbox \"fb09bfc884f16b4c0b7058686ab127085d4986dbdb6af94e0b7a656dbcf6b92b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c0afac1c8d3c28307c9c3e8c90e84df2a7d58ab17b0f0738c391e8f4e8a192a\""
\"9c0afac1c8d3c28307c9c3e8c90e84df2a7d58ab17b0f0738c391e8f4e8a192a\"" Jan 17 00:01:24.694057 containerd[1868]: time="2026-01-17T00:01:24.694023205Z" level=info msg="StartContainer for \"9c0afac1c8d3c28307c9c3e8c90e84df2a7d58ab17b0f0738c391e8f4e8a192a\"" Jan 17 00:01:24.699159 containerd[1868]: time="2026-01-17T00:01:24.699041966Z" level=info msg="CreateContainer within sandbox \"7387b8e8036e216d7cba6737c554e7cdc27e363987fc14d7c2c7da9c985cd59a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"37d54811d5328bc6200e4e007daa184ba1b25120f6dd94b69316bd44e4ccd532\"" Jan 17 00:01:24.705536 containerd[1868]: time="2026-01-17T00:01:24.705515328Z" level=info msg="StartContainer for \"37d54811d5328bc6200e4e007daa184ba1b25120f6dd94b69316bd44e4ccd532\"" Jan 17 00:01:24.746545 containerd[1868]: time="2026-01-17T00:01:24.746198258Z" level=info msg="StartContainer for \"72474d5e4a8636b4afcc698b82e6d30454938411a327e04cf7f4c643644170dc\" returns successfully" Jan 17 00:01:24.805904 containerd[1868]: time="2026-01-17T00:01:24.805386432Z" level=info msg="StartContainer for \"37d54811d5328bc6200e4e007daa184ba1b25120f6dd94b69316bd44e4ccd532\" returns successfully" Jan 17 00:01:24.816279 containerd[1868]: time="2026-01-17T00:01:24.816239234Z" level=info msg="StartContainer for \"9c0afac1c8d3c28307c9c3e8c90e84df2a7d58ab17b0f0738c391e8f4e8a192a\" returns successfully" Jan 17 00:01:24.824788 kubelet[2916]: E0117 00:01:24.824753 2916 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.34:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:01:25.749631 kubelet[2916]: E0117 00:01:25.746575 2916 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:25.758426 kubelet[2916]: E0117 00:01:25.758232 2916 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:25.761332 kubelet[2916]: E0117 00:01:25.761307 2916 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:25.885260 kubelet[2916]: I0117 00:01:25.885225 2916 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.666212 kubelet[2916]: I0117 00:01:26.664283 2916 apiserver.go:52] "Watching apiserver" Jan 17 00:01:26.704198 kubelet[2916]: E0117 00:01:26.704121 2916 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-e1db9b2d97\" not found" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.748873 kubelet[2916]: I0117 00:01:26.748739 2916 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.767271 kubelet[2916]: I0117 00:01:26.765913 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.769252 kubelet[2916]: I0117 00:01:26.767849 2916 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.773186 kubelet[2916]: I0117 00:01:26.770331 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.779056 kubelet[2916]: I0117 00:01:26.779035 2916 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:01:26.780679 kubelet[2916]: I0117 00:01:26.780651 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.842502 kubelet[2916]: E0117 00:01:26.842310 2916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e1db9b2d97\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.842853 kubelet[2916]: E0117 00:01:26.842756 2916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.844180 kubelet[2916]: E0117 00:01:26.843162 2916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.844358 kubelet[2916]: E0117 00:01:26.844341 2916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.844602 kubelet[2916]: I0117 00:01:26.844425 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.846036 kubelet[2916]: E0117 00:01:26.846016 2916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.847181 kubelet[2916]: I0117 00:01:26.846216 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:26.849363 kubelet[2916]: E0117 00:01:26.849341 2916 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e1db9b2d97\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:27.768321 kubelet[2916]: I0117 00:01:27.768077 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:27.768321 kubelet[2916]: I0117 00:01:27.768141 2916 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:27.777146 kubelet[2916]: W0117 00:01:27.776941 2916 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:01:27.779467 kubelet[2916]: W0117 00:01:27.779453 2916 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Jan 17 00:01:28.842020 systemd[1]: Reloading requested from client PID 3249 ('systemctl') (unit session-9.scope)... Jan 17 00:01:28.842034 systemd[1]: Reloading... Jan 17 00:01:28.919321 zram_generator::config[3289]: No configuration found. Jan 17 00:01:29.034769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:01:29.117868 systemd[1]: Reloading finished in 275 ms. Jan 17 00:01:29.146617 kubelet[2916]: I0117 00:01:29.146509 2916 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:01:29.146686 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:29.159679 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:01:29.160125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:29.166598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:29.381920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:29.390483 (kubelet)[3363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:01:29.435015 kubelet[3363]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:01:29.435015 kubelet[3363]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:01:29.435015 kubelet[3363]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:01:29.435433 kubelet[3363]: I0117 00:01:29.435073 3363 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:01:29.440328 kubelet[3363]: I0117 00:01:29.440297 3363 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:01:29.441746 kubelet[3363]: I0117 00:01:29.440421 3363 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:01:29.441746 kubelet[3363]: I0117 00:01:29.440671 3363 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:01:29.442069 kubelet[3363]: I0117 00:01:29.442054 3363 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:01:29.447374 kubelet[3363]: I0117 00:01:29.447353 3363 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:01:29.457703 kubelet[3363]: E0117 00:01:29.456972 3363 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:01:29.458006 kubelet[3363]: I0117 00:01:29.457980 3363 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 17 00:01:29.461030 kubelet[3363]: I0117 00:01:29.461005 3363 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:01:29.461624 kubelet[3363]: I0117 00:01:29.461590 3363 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:01:29.461869 kubelet[3363]: I0117 00:01:29.461699 3363 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-e1db9b2d97","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:01:29.461986 kubelet[3363]: I0117 00:01:29.461976 3363 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:01:29.462050 kubelet[3363]: I0117 00:01:29.462042 3363 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:01:29.462147 kubelet[3363]: I0117 00:01:29.462138 3363 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:01:29.462354 kubelet[3363]: I0117 00:01:29.462342 3363 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:01:29.462432 kubelet[3363]: I0117 00:01:29.462423 3363 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:01:29.462492 kubelet[3363]: I0117 00:01:29.462485 3363 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:01:29.462546 kubelet[3363]: I0117 00:01:29.462537 3363 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:01:29.468303 kubelet[3363]: I0117 00:01:29.468280 3363 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:01:29.470543 kubelet[3363]: I0117 00:01:29.470220 3363 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:01:29.470659 kubelet[3363]: I0117 00:01:29.470644 3363 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:01:29.470692 kubelet[3363]: I0117 00:01:29.470671 3363 server.go:1287] "Started kubelet" Jan 
17 00:01:29.473644 kubelet[3363]: I0117 00:01:29.472828 3363 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:01:29.473644 kubelet[3363]: I0117 00:01:29.473571 3363 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:01:29.473975 kubelet[3363]: I0117 00:01:29.473931 3363 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:01:29.475229 kubelet[3363]: I0117 00:01:29.475210 3363 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:01:29.483296 kubelet[3363]: I0117 00:01:29.481793 3363 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:01:29.490588 kubelet[3363]: I0117 00:01:29.490562 3363 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:01:29.491685 kubelet[3363]: I0117 00:01:29.491671 3363 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:01:29.491884 kubelet[3363]: E0117 00:01:29.491866 3363 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-e1db9b2d97\" not found" Jan 17 00:01:29.498535 kubelet[3363]: I0117 00:01:29.498513 3363 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:01:29.498647 kubelet[3363]: I0117 00:01:29.498635 3363 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:01:29.501832 kubelet[3363]: I0117 00:01:29.501208 3363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:01:29.502132 kubelet[3363]: I0117 00:01:29.502112 3363 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:01:29.502132 kubelet[3363]: I0117 00:01:29.502134 3363 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:01:29.502281 kubelet[3363]: I0117 00:01:29.502152 3363 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
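
The "Setting rate limiting for endpoint" entry above advertises qps=100 with burstTokens=10 for the podresources socket, which describes a token bucket: a refill rate of 100 tokens per second with a bucket depth of 10. A minimal Go sketch of the same policy, using golang.org/x/time/rate rather than the kubelet's own code (all identifiers below are illustrative):

    package main

    import (
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        // qps=100 -> refill 100 tokens/s; burstTokens=10 -> bucket depth 10.
        limiter := rate.NewLimiter(rate.Limit(100), 10)

        admitted := 0
        for i := 0; i < 50; i++ { // 50 requests arriving essentially at once
            if limiter.Allow() {
                admitted++
            }
        }
        fmt.Println("admitted from the initial burst:", admitted) // ~10
    }

A burst of simultaneous callers drains the 10 stored tokens almost immediately; after that, admissions settle at the 100/s refill rate.
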
Jan 17 00:01:29.502281 kubelet[3363]: I0117 00:01:29.502159 3363 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:01:29.502281 kubelet[3363]: E0117 00:01:29.502216 3363 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:01:29.513699 kubelet[3363]: I0117 00:01:29.513054 3363 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:01:29.513936 kubelet[3363]: I0117 00:01:29.513916 3363 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:01:29.516264 kubelet[3363]: I0117 00:01:29.516247 3363 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:01:29.566652 kubelet[3363]: I0117 00:01:29.566625 3363 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:01:29.566652 kubelet[3363]: I0117 00:01:29.566643 3363 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:01:29.566652 kubelet[3363]: I0117 00:01:29.566664 3363 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:01:29.566836 kubelet[3363]: I0117 00:01:29.566817 3363 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:01:29.566869 kubelet[3363]: I0117 00:01:29.566833 3363 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:01:29.566869 kubelet[3363]: I0117 00:01:29.566852 3363 policy_none.go:49] "None policy: Start" Jan 17 00:01:29.566869 kubelet[3363]: I0117 00:01:29.566860 3363 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:01:29.566933 kubelet[3363]: I0117 00:01:29.566874 3363 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:01:29.566978 kubelet[3363]: I0117 00:01:29.566965 3363 state_mem.go:75] "Updated machine memory state" Jan 17 00:01:29.568456 kubelet[3363]: I0117 00:01:29.568112 3363 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:01:29.568456 kubelet[3363]: I0117 00:01:29.568296 3363 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:01:29.568456 kubelet[3363]: I0117 00:01:29.568306 3363 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:01:29.569611 kubelet[3363]: I0117 00:01:29.569086 3363 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:01:29.570412 kubelet[3363]: E0117 00:01:29.570390 3363 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:01:29.603668 kubelet[3363]: I0117 00:01:29.603628 3363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.604197 kubelet[3363]: I0117 00:01:29.604034 3363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.604394 kubelet[3363]: I0117 00:01:29.604380 3363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.617365 kubelet[3363]: W0117 00:01:29.617332 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:01:29.617712 kubelet[3363]: E0117 00:01:29.617637 3363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e1db9b2d97\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.617712 kubelet[3363]: W0117 00:01:29.617544 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:01:29.618501 kubelet[3363]: W0117 00:01:29.618477 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:01:29.618665 kubelet[3363]: E0117 00:01:29.618617 3363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.671048 kubelet[3363]: I0117 00:01:29.670974 3363 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.691452 kubelet[3363]: I0117 00:01:29.691141 3363 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.691452 kubelet[3363]: I0117 00:01:29.691234 3363 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800361 kubelet[3363]: I0117 00:01:29.800196 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800361 kubelet[3363]: I0117 00:01:29.800244 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800361 kubelet[3363]: I0117 00:01:29.800266 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8c4515d94dec88581d5707b3228aeb2-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-e1db9b2d97\" (UID: \"f8c4515d94dec88581d5707b3228aeb2\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800361 
kubelet[3363]: I0117 00:01:29.800285 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f00be6668289ccf1dd12e434e723c88-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" (UID: \"5f00be6668289ccf1dd12e434e723c88\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800361 kubelet[3363]: I0117 00:01:29.800311 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800596 kubelet[3363]: I0117 00:01:29.800378 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800596 kubelet[3363]: I0117 00:01:29.800409 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/81cb62e1b9e18a45f52c7c7274fa77ff-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-e1db9b2d97\" (UID: \"81cb62e1b9e18a45f52c7c7274fa77ff\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800596 kubelet[3363]: I0117 00:01:29.800427 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f00be6668289ccf1dd12e434e723c88-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" (UID: \"5f00be6668289ccf1dd12e434e723c88\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:29.800596 kubelet[3363]: I0117 00:01:29.800443 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f00be6668289ccf1dd12e434e723c88-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" (UID: \"5f00be6668289ccf1dd12e434e723c88\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:30.465363 kubelet[3363]: I0117 00:01:30.464217 3363 apiserver.go:52] "Watching apiserver" Jan 17 00:01:30.499544 kubelet[3363]: I0117 00:01:30.499449 3363 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:01:30.552382 kubelet[3363]: I0117 00:01:30.552346 3363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:30.554361 kubelet[3363]: I0117 00:01:30.552735 3363 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:30.580522 kubelet[3363]: W0117 00:01:30.580485 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:01:30.580632 kubelet[3363]: E0117 00:01:30.580546 3363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e1db9b2d97\" already 
exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:30.580766 kubelet[3363]: W0117 00:01:30.580748 3363 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:01:30.580804 kubelet[3363]: E0117 00:01:30.580779 3363 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-e1db9b2d97\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" Jan 17 00:01:30.598182 kubelet[3363]: I0117 00:01:30.598102 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e1db9b2d97" podStartSLOduration=3.598083794 podStartE2EDuration="3.598083794s" podCreationTimestamp="2026-01-17 00:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:30.580338717 +0000 UTC m=+1.185643991" watchObservedRunningTime="2026-01-17 00:01:30.598083794 +0000 UTC m=+1.203389028" Jan 17 00:01:30.598348 kubelet[3363]: I0117 00:01:30.598258 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e1db9b2d97" podStartSLOduration=3.598251554 podStartE2EDuration="3.598251554s" podCreationTimestamp="2026-01-17 00:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:30.596985554 +0000 UTC m=+1.202290828" watchObservedRunningTime="2026-01-17 00:01:30.598251554 +0000 UTC m=+1.203556828" Jan 17 00:01:30.619341 kubelet[3363]: I0117 00:01:30.618157 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e1db9b2d97" podStartSLOduration=1.618142671 podStartE2EDuration="1.618142671s" podCreationTimestamp="2026-01-17 00:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:30.617886311 +0000 UTC m=+1.223191585" watchObservedRunningTime="2026-01-17 00:01:30.618142671 +0000 UTC m=+1.223447945" Jan 17 00:01:34.637479 kubelet[3363]: I0117 00:01:34.637442 3363 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:01:34.638115 kubelet[3363]: I0117 00:01:34.637937 3363 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:01:34.638229 containerd[1868]: time="2026-01-17T00:01:34.637717398Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 17 00:01:35.631186 kubelet[3363]: I0117 00:01:35.631137 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e3411c2-b48e-4411-ac74-562dd78cbb45-kube-proxy\") pod \"kube-proxy-jknpk\" (UID: \"3e3411c2-b48e-4411-ac74-562dd78cbb45\") " pod="kube-system/kube-proxy-jknpk" Jan 17 00:01:35.631471 kubelet[3363]: I0117 00:01:35.631370 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e3411c2-b48e-4411-ac74-562dd78cbb45-xtables-lock\") pod \"kube-proxy-jknpk\" (UID: \"3e3411c2-b48e-4411-ac74-562dd78cbb45\") " pod="kube-system/kube-proxy-jknpk" Jan 17 00:01:35.631471 kubelet[3363]: I0117 00:01:35.631404 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e3411c2-b48e-4411-ac74-562dd78cbb45-lib-modules\") pod \"kube-proxy-jknpk\" (UID: \"3e3411c2-b48e-4411-ac74-562dd78cbb45\") " pod="kube-system/kube-proxy-jknpk" Jan 17 00:01:35.631471 kubelet[3363]: I0117 00:01:35.631421 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dfm5\" (UniqueName: \"kubernetes.io/projected/3e3411c2-b48e-4411-ac74-562dd78cbb45-kube-api-access-5dfm5\") pod \"kube-proxy-jknpk\" (UID: \"3e3411c2-b48e-4411-ac74-562dd78cbb45\") " pod="kube-system/kube-proxy-jknpk" Jan 17 00:01:35.833297 kubelet[3363]: I0117 00:01:35.833253 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bczjx\" (UniqueName: \"kubernetes.io/projected/044f8554-1cac-4370-bac9-f8c76510c856-kube-api-access-bczjx\") pod \"tigera-operator-7dcd859c48-8r5tm\" (UID: \"044f8554-1cac-4370-bac9-f8c76510c856\") " pod="tigera-operator/tigera-operator-7dcd859c48-8r5tm" Jan 17 00:01:35.833297 kubelet[3363]: I0117 00:01:35.833301 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/044f8554-1cac-4370-bac9-f8c76510c856-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8r5tm\" (UID: \"044f8554-1cac-4370-bac9-f8c76510c856\") " pod="tigera-operator/tigera-operator-7dcd859c48-8r5tm" Jan 17 00:01:35.913220 containerd[1868]: time="2026-01-17T00:01:35.912884133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jknpk,Uid:3e3411c2-b48e-4411-ac74-562dd78cbb45,Namespace:kube-system,Attempt:0,}" Jan 17 00:01:35.960231 containerd[1868]: time="2026-01-17T00:01:35.960141697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:35.960331 containerd[1868]: time="2026-01-17T00:01:35.960240057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:35.960331 containerd[1868]: time="2026-01-17T00:01:35.960274937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:35.960395 containerd[1868]: time="2026-01-17T00:01:35.960379377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:35.989161 containerd[1868]: time="2026-01-17T00:01:35.989125179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jknpk,Uid:3e3411c2-b48e-4411-ac74-562dd78cbb45,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9528be2663f83f5a6434dc4aaf2998f5c6cb857d28696c4ac90605a5b476c2c\"" Jan 17 00:01:35.991936 containerd[1868]: time="2026-01-17T00:01:35.991908579Z" level=info msg="CreateContainer within sandbox \"a9528be2663f83f5a6434dc4aaf2998f5c6cb857d28696c4ac90605a5b476c2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:01:36.032504 containerd[1868]: time="2026-01-17T00:01:36.032469342Z" level=info msg="CreateContainer within sandbox \"a9528be2663f83f5a6434dc4aaf2998f5c6cb857d28696c4ac90605a5b476c2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc885f6c5d00a208de6426e7d5d99dcf9e5c0aa71386d869c8c35673b9527a13\"" Jan 17 00:01:36.033135 containerd[1868]: time="2026-01-17T00:01:36.033097942Z" level=info msg="StartContainer for \"cc885f6c5d00a208de6426e7d5d99dcf9e5c0aa71386d869c8c35673b9527a13\"" Jan 17 00:01:36.049041 containerd[1868]: time="2026-01-17T00:01:36.048428223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8r5tm,Uid:044f8554-1cac-4370-bac9-f8c76510c856,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:01:36.089449 containerd[1868]: time="2026-01-17T00:01:36.089403586Z" level=info msg="StartContainer for \"cc885f6c5d00a208de6426e7d5d99dcf9e5c0aa71386d869c8c35673b9527a13\" returns successfully" Jan 17 00:01:36.121245 containerd[1868]: time="2026-01-17T00:01:36.118877309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:36.121245 containerd[1868]: time="2026-01-17T00:01:36.119221869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:36.121245 containerd[1868]: time="2026-01-17T00:01:36.119247189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:36.121245 containerd[1868]: time="2026-01-17T00:01:36.119795549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:36.169814 containerd[1868]: time="2026-01-17T00:01:36.169449312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8r5tm,Uid:044f8554-1cac-4370-bac9-f8c76510c856,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a7b6cd99b30ce33d7a3e110f46b298f1fe3dc8276652ece6ac599027240f316\"" Jan 17 00:01:36.173297 containerd[1868]: time="2026-01-17T00:01:36.173255993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:01:36.573813 kubelet[3363]: I0117 00:01:36.573709 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jknpk" podStartSLOduration=1.573692423 podStartE2EDuration="1.573692423s" podCreationTimestamp="2026-01-17 00:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:36.573469303 +0000 UTC m=+7.178774577" watchObservedRunningTime="2026-01-17 00:01:36.573692423 +0000 UTC m=+7.178997697" Jan 17 00:01:38.015016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285355905.mount: Deactivated successfully. Jan 17 00:01:38.439932 containerd[1868]: time="2026-01-17T00:01:38.439891962Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:38.443242 containerd[1868]: time="2026-01-17T00:01:38.443206562Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 17 00:01:38.447815 containerd[1868]: time="2026-01-17T00:01:38.447785723Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:38.455662 containerd[1868]: time="2026-01-17T00:01:38.452932243Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:38.455662 containerd[1868]: time="2026-01-17T00:01:38.455028283Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.28172745s" Jan 17 00:01:38.455662 containerd[1868]: time="2026-01-17T00:01:38.455057243Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 17 00:01:38.460186 containerd[1868]: time="2026-01-17T00:01:38.460151964Z" level=info msg="CreateContainer within sandbox \"5a7b6cd99b30ce33d7a3e110f46b298f1fe3dc8276652ece6ac599027240f316\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:01:38.498282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983843351.mount: Deactivated successfully. Jan 17 00:01:38.500624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2725202945.mount: Deactivated successfully. 
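
The PullImage entries above carry enough detail for a throughput check: 22,152,004 bytes read for quay.io/tigera/operator:v1.38.7, pulled in 2.28172745s, i.e. roughly 9.7 MB/s. The same check in a few lines of Go (pure arithmetic on the logged figures):

    package main

    import "fmt"

    func main() {
        const bytesRead = 22152004 // "active requests=0, bytes read=22152004"
        const seconds = 2.28172745 // "... in 2.28172745s"
        fmt.Printf("effective pull rate: %.1f MB/s\n", bytesRead/seconds/1e6)
    }
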
Jan 17 00:01:38.513356 containerd[1868]: time="2026-01-17T00:01:38.513301687Z" level=info msg="CreateContainer within sandbox \"5a7b6cd99b30ce33d7a3e110f46b298f1fe3dc8276652ece6ac599027240f316\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1b1f56f9ef92357956fd297defa590e3f879d8bc568c09df4e74968ad9482575\"" Jan 17 00:01:38.513849 containerd[1868]: time="2026-01-17T00:01:38.513749888Z" level=info msg="StartContainer for \"1b1f56f9ef92357956fd297defa590e3f879d8bc568c09df4e74968ad9482575\"" Jan 17 00:01:38.557902 containerd[1868]: time="2026-01-17T00:01:38.557859971Z" level=info msg="StartContainer for \"1b1f56f9ef92357956fd297defa590e3f879d8bc568c09df4e74968ad9482575\" returns successfully" Jan 17 00:01:40.747641 kubelet[3363]: I0117 00:01:40.747574 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8r5tm" podStartSLOduration=3.460467723 podStartE2EDuration="5.747559093s" podCreationTimestamp="2026-01-17 00:01:35 +0000 UTC" firstStartedPulling="2026-01-17 00:01:36.170969593 +0000 UTC m=+6.776274867" lastFinishedPulling="2026-01-17 00:01:38.458060963 +0000 UTC m=+9.063366237" observedRunningTime="2026-01-17 00:01:38.587385773 +0000 UTC m=+9.192691047" watchObservedRunningTime="2026-01-17 00:01:40.747559093 +0000 UTC m=+11.352864367" Jan 17 00:01:44.756304 sudo[2373]: pam_unix(sudo:session): session closed for user root Jan 17 00:01:44.836377 sshd[2369]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:44.843436 systemd-logind[1817]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:01:44.843666 systemd[1]: sshd@6-10.200.20.34:22-10.200.16.10:40530.service: Deactivated successfully. Jan 17 00:01:44.845123 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:01:44.853329 systemd-logind[1817]: Removed session 9. 
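
Note how the startup-latency entry above splits its figures: podStartE2EDuration (5.747559093s) exceeds podStartSLOduration (3.460467723s) by exactly the firstStartedPulling-to-lastFinishedPulling window, because the SLO figure excludes image-pull time. A sketch reproducing that gap from the logged timestamps (the layout string is Go's default time format; error handling is elided):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        started, _ := time.Parse(layout, "2026-01-17 00:01:36.170969593 +0000 UTC")
        finished, _ := time.Parse(layout, "2026-01-17 00:01:38.458060963 +0000 UTC")
        fmt.Println("pull window:", finished.Sub(started)) // 2.28709137s
        // 5.747559093s - 3.460467723s = 2.28709137s: the same gap.
    }
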
Jan 17 00:01:53.042356 kubelet[3363]: I0117 00:01:53.042272 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xblmw\" (UniqueName: \"kubernetes.io/projected/4f56e22f-e115-4231-973c-ca1251b74ac2-kube-api-access-xblmw\") pod \"calico-typha-5658b4bcd-6nqz5\" (UID: \"4f56e22f-e115-4231-973c-ca1251b74ac2\") " pod="calico-system/calico-typha-5658b4bcd-6nqz5" Jan 17 00:01:53.042356 kubelet[3363]: I0117 00:01:53.042315 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4f56e22f-e115-4231-973c-ca1251b74ac2-typha-certs\") pod \"calico-typha-5658b4bcd-6nqz5\" (UID: \"4f56e22f-e115-4231-973c-ca1251b74ac2\") " pod="calico-system/calico-typha-5658b4bcd-6nqz5" Jan 17 00:01:53.042356 kubelet[3363]: I0117 00:01:53.042335 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f56e22f-e115-4231-973c-ca1251b74ac2-tigera-ca-bundle\") pod \"calico-typha-5658b4bcd-6nqz5\" (UID: \"4f56e22f-e115-4231-973c-ca1251b74ac2\") " pod="calico-system/calico-typha-5658b4bcd-6nqz5" Jan 17 00:01:53.243785 kubelet[3363]: I0117 00:01:53.243499 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-cni-log-dir\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.243785 kubelet[3363]: I0117 00:01:53.243532 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-lib-modules\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.243785 kubelet[3363]: I0117 00:01:53.243549 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-xtables-lock\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.243785 kubelet[3363]: I0117 00:01:53.243564 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4n62\" (UniqueName: \"kubernetes.io/projected/428685a7-80ab-43cd-996b-f87b6005c042-kube-api-access-n4n62\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.243785 kubelet[3363]: I0117 00:01:53.243583 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/428685a7-80ab-43cd-996b-f87b6005c042-tigera-ca-bundle\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.244046 kubelet[3363]: I0117 00:01:53.243598 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-policysync\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" 
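
Each reconciler_common entry above identifies a mount by a UniqueName of the shape <plugin>/<pod-UID>-<volume-name>, e.g. kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-cni-log-dir. That shape is inferred from the entries themselves rather than any published contract; taking one apart (all identifiers below are illustrative):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unique := "kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-cni-log-dir"
        parts := strings.SplitN(unique, "/", 3) // vendor / plugin / pod-scoped name
        plugin := parts[0] + "/" + parts[1]
        rest := parts[2]
        uid, volume := rest[:36], rest[37:] // a pod UID is 36 chars (8-4-4-4-12)
        fmt.Println("plugin:", plugin) // kubernetes.io/host-path
        fmt.Println("pod UID:", uid)   // 428685a7-80ab-43cd-996b-f87b6005c042
        fmt.Println("volume:", volume) // cni-log-dir
    }
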
Jan 17 00:01:53.244046 kubelet[3363]: I0117 00:01:53.243614 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-var-lib-calico\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.244046 kubelet[3363]: I0117 00:01:53.243639 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-var-run-calico\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.244046 kubelet[3363]: I0117 00:01:53.243655 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-flexvol-driver-host\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.244046 kubelet[3363]: I0117 00:01:53.243673 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-cni-bin-dir\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.244158 kubelet[3363]: I0117 00:01:53.243691 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/428685a7-80ab-43cd-996b-f87b6005c042-node-certs\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.244158 kubelet[3363]: I0117 00:01:53.243708 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/428685a7-80ab-43cd-996b-f87b6005c042-cni-net-dir\") pod \"calico-node-d6jzl\" (UID: \"428685a7-80ab-43cd-996b-f87b6005c042\") " pod="calico-system/calico-node-d6jzl" Jan 17 00:01:53.329495 containerd[1868]: time="2026-01-17T00:01:53.329036487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5658b4bcd-6nqz5,Uid:4f56e22f-e115-4231-973c-ca1251b74ac2,Namespace:calico-system,Attempt:0,}" Jan 17 00:01:53.351065 kubelet[3363]: E0117 00:01:53.350945 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.351397 kubelet[3363]: W0117 00:01:53.351192 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.351397 kubelet[3363]: E0117 00:01:53.351223 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.377209 containerd[1868]: time="2026-01-17T00:01:53.375345559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:53.377209 containerd[1868]: time="2026-01-17T00:01:53.375392719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:53.377209 containerd[1868]: time="2026-01-17T00:01:53.375417039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:53.377209 containerd[1868]: time="2026-01-17T00:01:53.375543239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:53.383016 kubelet[3363]: E0117 00:01:53.382419 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.383016 kubelet[3363]: W0117 00:01:53.382443 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.383016 kubelet[3363]: E0117 00:01:53.382650 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.411750 kubelet[3363]: E0117 00:01:53.411470 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:01:53.415036 kubelet[3363]: I0117 00:01:53.414789 3363 status_manager.go:890] "Failed to get status for pod" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" pod="calico-system/csi-node-driver-6zk6p" err="pods \"csi-node-driver-6zk6p\" is forbidden: User \"system:node:ci-4081.3.6-n-e1db9b2d97\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-e1db9b2d97' and this object" Jan 17 00:01:53.438810 kubelet[3363]: E0117 00:01:53.438769 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.439412 kubelet[3363]: W0117 00:01:53.438887 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.439412 kubelet[3363]: E0117 00:01:53.439144 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.440247 kubelet[3363]: E0117 00:01:53.440232 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.440420 kubelet[3363]: W0117 00:01:53.440321 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.440420 kubelet[3363]: E0117 00:01:53.440373 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:01:53.450971 kubelet[3363]: E0117 00:01:53.450850 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.450971 kubelet[3363]: W0117 00:01:53.450866 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.450971 kubelet[3363]: E0117 00:01:53.450879 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.451737 kubelet[3363]: E0117 00:01:53.451560 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.451737 kubelet[3363]: W0117 00:01:53.451604 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.451737 kubelet[3363]: E0117 00:01:53.451617 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.452238 kubelet[3363]: E0117 00:01:53.452087 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.452238 kubelet[3363]: W0117 00:01:53.452099 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.452238 kubelet[3363]: E0117 00:01:53.452122 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.453434 containerd[1868]: time="2026-01-17T00:01:53.453403425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5658b4bcd-6nqz5,Uid:4f56e22f-e115-4231-973c-ca1251b74ac2,Namespace:calico-system,Attempt:0,} returns sandbox id \"26ed376d78b1bd6279d58cf310fb654a82a8e473eedacaf3a87977a6972215a9\"" Jan 17 00:01:53.454045 kubelet[3363]: E0117 00:01:53.453900 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.454045 kubelet[3363]: W0117 00:01:53.453917 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.454045 kubelet[3363]: E0117 00:01:53.453931 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:01:53.454045 kubelet[3363]: I0117 00:01:53.453952 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5fabfab-a45e-49bd-b3b5-28097628ac44-kubelet-dir\") pod \"csi-node-driver-6zk6p\" (UID: \"e5fabfab-a45e-49bd-b3b5-28097628ac44\") " pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:01:53.457214 containerd[1868]: time="2026-01-17T00:01:53.456382705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:01:53.457446 kubelet[3363]: E0117 00:01:53.457317 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.457446 kubelet[3363]: W0117 00:01:53.457330 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.457446 kubelet[3363]: E0117 00:01:53.457348 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.457446 kubelet[3363]: I0117 00:01:53.457371 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e5fabfab-a45e-49bd-b3b5-28097628ac44-varrun\") pod \"csi-node-driver-6zk6p\" (UID: \"e5fabfab-a45e-49bd-b3b5-28097628ac44\") " pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:01:53.458910 kubelet[3363]: E0117 00:01:53.458471 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.458910 kubelet[3363]: W0117 00:01:53.458486 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.458910 kubelet[3363]: E0117 00:01:53.458577 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.459751 kubelet[3363]: E0117 00:01:53.459305 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.459751 kubelet[3363]: W0117 00:01:53.459318 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.459856 kubelet[3363]: E0117 00:01:53.459835 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:01:53.460135 kubelet[3363]: E0117 00:01:53.460115 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.460135 kubelet[3363]: W0117 00:01:53.460133 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.460385 kubelet[3363]: E0117 00:01:53.460216 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.460536 kubelet[3363]: I0117 00:01:53.460255 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e5fabfab-a45e-49bd-b3b5-28097628ac44-registration-dir\") pod \"csi-node-driver-6zk6p\" (UID: \"e5fabfab-a45e-49bd-b3b5-28097628ac44\") " pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:01:53.462925 kubelet[3363]: E0117 00:01:53.462898 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.463269 kubelet[3363]: W0117 00:01:53.463207 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.463269 kubelet[3363]: E0117 00:01:53.463256 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.463618 kubelet[3363]: E0117 00:01:53.463598 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.463618 kubelet[3363]: W0117 00:01:53.463614 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.463746 kubelet[3363]: E0117 00:01:53.463712 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.463910 kubelet[3363]: E0117 00:01:53.463896 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.463910 kubelet[3363]: W0117 00:01:53.463907 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.464061 kubelet[3363]: E0117 00:01:53.464000 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:01:53.464061 kubelet[3363]: I0117 00:01:53.464025 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e5fabfab-a45e-49bd-b3b5-28097628ac44-socket-dir\") pod \"csi-node-driver-6zk6p\" (UID: \"e5fabfab-a45e-49bd-b3b5-28097628ac44\") " pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:01:53.464206 kubelet[3363]: E0117 00:01:53.464191 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.464206 kubelet[3363]: W0117 00:01:53.464204 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.464374 kubelet[3363]: E0117 00:01:53.464216 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.464481 kubelet[3363]: E0117 00:01:53.464466 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.464481 kubelet[3363]: W0117 00:01:53.464479 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.464545 kubelet[3363]: E0117 00:01:53.464493 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.464710 kubelet[3363]: E0117 00:01:53.464696 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.464710 kubelet[3363]: W0117 00:01:53.464709 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.464771 kubelet[3363]: E0117 00:01:53.464730 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.464881 kubelet[3363]: I0117 00:01:53.464815 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqshz\" (UniqueName: \"kubernetes.io/projected/e5fabfab-a45e-49bd-b3b5-28097628ac44-kube-api-access-zqshz\") pod \"csi-node-driver-6zk6p\" (UID: \"e5fabfab-a45e-49bd-b3b5-28097628ac44\") " pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:01:53.464970 kubelet[3363]: E0117 00:01:53.464953 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.465013 kubelet[3363]: W0117 00:01:53.464968 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.465013 kubelet[3363]: E0117 00:01:53.464995 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:01:53.465244 kubelet[3363]: E0117 00:01:53.465229 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.465244 kubelet[3363]: W0117 00:01:53.465242 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.465316 kubelet[3363]: E0117 00:01:53.465268 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.465459 kubelet[3363]: E0117 00:01:53.465445 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.465459 kubelet[3363]: W0117 00:01:53.465457 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.465526 kubelet[3363]: E0117 00:01:53.465466 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.465669 kubelet[3363]: E0117 00:01:53.465656 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.465669 kubelet[3363]: W0117 00:01:53.465667 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.465783 kubelet[3363]: E0117 00:01:53.465675 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:01:53.518185 containerd[1868]: time="2026-01-17T00:01:53.518126894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d6jzl,Uid:428685a7-80ab-43cd-996b-f87b6005c042,Namespace:calico-system,Attempt:0,}" Jan 17 00:01:53.565283 kubelet[3363]: E0117 00:01:53.565257 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:53.565565 kubelet[3363]: W0117 00:01:53.565414 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:53.565565 kubelet[3363]: E0117 00:01:53.565437 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
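The repeated triple above has a single root cause: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ for a FlexVolume driver, the uds executable does not exist yet, the call therefore produces no stdout, and unmarshalling that empty output fails. A minimal Go sketch of the mechanism (illustrative only, not kubelet's actual driver-call.go; the DriverStatus struct here is a simplified assumption):

```go
// flexprobe.go - sketch of why a missing FlexVolume driver yields the two
// errors seen in the log: an exec failure, then "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus stands in for the JSON a FlexVolume driver must print on
// stdout; the real kubelet type has more fields (assumption).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// The binary is not installed yet, so the exec fails and stdout stays empty
	// (kubelet's exec wrapper surfaces this as "executable file not found in $PATH").
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}

	// Unmarshalling the empty output reproduces the other error verbatim:
	// json.Unmarshal of zero bytes returns "unexpected end of JSON input".
	var status DriverStatus
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Printf("failed to unmarshal output for command: init, error: %v\n", err)
	}
}
```

Kubelet re-runs this probe on every scan of the plugin directory, which is why the same three messages recur for as long as the driver binary is absent.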
[FlexVolume probe failures of the same three-message form repeat between 00:01:53.565 and 00:01:53.586 while the calico-node-d6jzl sandbox is being set up; they are omitted here, leaving the containerd shim records from that window]
Jan 17 00:01:53.573459 containerd[1868]: time="2026-01-17T00:01:53.573251964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:53.573459 containerd[1868]: time="2026-01-17T00:01:53.573302524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:53.573459 containerd[1868]: time="2026-01-17T00:01:53.573317484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:53.573459 containerd[1868]: time="2026-01-17T00:01:53.573390884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:53.617538 containerd[1868]: time="2026-01-17T00:01:53.617501437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d6jzl,Uid:428685a7-80ab-43cd-996b-f87b6005c042,Namespace:calico-system,Attempt:0,} returns sandbox id \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\""
Jan 17 00:01:54.504267 kubelet[3363]: E0117 00:01:54.502520 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
Jan 17 00:01:54.725137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20759118.mount: Deactivated successfully.
Jan 17 00:01:55.243207 containerd[1868]: time="2026-01-17T00:01:55.243143233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:55.246588 containerd[1868]: time="2026-01-17T00:01:55.246452553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 17 00:01:55.249947 containerd[1868]: time="2026-01-17T00:01:55.249900232Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:55.254985 containerd[1868]: time="2026-01-17T00:01:55.254852671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:55.257361 containerd[1868]: time="2026-01-17T00:01:55.256856111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.800439526s"
Jan 17 00:01:55.257361 containerd[1868]: time="2026-01-17T00:01:55.256907311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 17 00:01:55.259874 containerd[1868]: time="2026-01-17T00:01:55.258305071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:01:55.284008 containerd[1868]: time="2026-01-17T00:01:55.283873946Z" level=info msg="CreateContainer within sandbox \"26ed376d78b1bd6279d58cf310fb654a82a8e473eedacaf3a87977a6972215a9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:01:55.313588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525028500.mount: Deactivated successfully.
Jan 17 00:01:55.332210 containerd[1868]: time="2026-01-17T00:01:55.332093578Z" level=info msg="CreateContainer within sandbox \"26ed376d78b1bd6279d58cf310fb654a82a8e473eedacaf3a87977a6972215a9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f3875422647cdf5fdd8d00f78f580a78c5af7be58552d3ed4ecb9f8bebc86224\""
Jan 17 00:01:55.332748 containerd[1868]: time="2026-01-17T00:01:55.332722818Z" level=info msg="StartContainer for \"f3875422647cdf5fdd8d00f78f580a78c5af7be58552d3ed4ecb9f8bebc86224\""
Jan 17 00:01:55.393299 containerd[1868]: time="2026-01-17T00:01:55.393237087Z" level=info msg="StartContainer for \"f3875422647cdf5fdd8d00f78f580a78c5af7be58552d3ed4ecb9f8bebc86224\" returns successfully"
Jan 17 00:01:55.651995 kubelet[3363]: I0117 00:01:55.651700 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5658b4bcd-6nqz5" podStartSLOduration=1.849001956 podStartE2EDuration="3.651685002s" podCreationTimestamp="2026-01-17 00:01:52 +0000 UTC" firstStartedPulling="2026-01-17 00:01:53.455375945 +0000 UTC m=+24.060681219" lastFinishedPulling="2026-01-17 00:01:55.258058991 +0000 UTC m=+25.863364265" observedRunningTime="2026-01-17 00:01:55.623454327 +0000 UTC m=+26.228759601" watchObservedRunningTime="2026-01-17 00:01:55.651685002 +0000 UTC m=+26.256990276"
[the FlexVolume probe-failure triple resumes at 00:01:55.668 and repeats, timestamps only changing, through 00:01:55.687; those repetitions are omitted]
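The "Observed pod startup duration" record above encodes a simple relationship: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A short Go check of the arithmetic (a sketch; the field semantics are inferred from kubelet's pod startup latency tracker and the monotonic "m=+..." suffixes are dropped for parsing):

```go
// podlatency.go - verifies the durations in the pod_startup_latency record.
package main

import (
	"fmt"
	"time"
)

// Layout matching time.Time.String() output as it appears in the log record.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-17 00:01:52 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-01-17 00:01:53.455375945 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-17 00:01:55.258058991 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-01-17 00:01:55.651685002 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)        // 3.651685002s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 1.802683046s spent pulling images
	slo := e2e - pulling               // 1.849001956s = podStartSLOduration

	fmt.Println(e2e, pulling, slo)
}
```

Note that the E2E figure matches watchObservedRunningTime rather than observedRunningTime, and that containerd's own "in 1.800439526s" pull timing is measured over a slightly narrower window than kubelet's pulling interval.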
Error: unexpected end of JSON input" Jan 17 00:01:56.503206 kubelet[3363]: E0117 00:01:56.503020 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:01:56.580375 containerd[1868]: time="2026-01-17T00:01:56.580325745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:56.584049 containerd[1868]: time="2026-01-17T00:01:56.583899864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 17 00:01:56.587238 containerd[1868]: time="2026-01-17T00:01:56.587204983Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:56.593313 containerd[1868]: time="2026-01-17T00:01:56.593044340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:56.593915 containerd[1868]: time="2026-01-17T00:01:56.593879220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.334623549s" Jan 17 00:01:56.593915 containerd[1868]: time="2026-01-17T00:01:56.593910900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 17 00:01:56.595967 containerd[1868]: time="2026-01-17T00:01:56.595940219Z" level=info msg="CreateContainer within sandbox \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:01:56.608147 kubelet[3363]: I0117 00:01:56.608124 3363 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:01:56.639088 containerd[1868]: time="2026-01-17T00:01:56.639049481Z" level=info msg="CreateContainer within sandbox \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fb42fa9387c97a361b03b8310d9bae9fd54f94e1fb19471250dcc1e428f0d2da\"" Jan 17 00:01:56.639782 containerd[1868]: time="2026-01-17T00:01:56.639757041Z" level=info msg="StartContainer for \"fb42fa9387c97a361b03b8310d9bae9fd54f94e1fb19471250dcc1e428f0d2da\"" Jan 17 00:01:56.685161 kubelet[3363]: E0117 00:01:56.685052 3363 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:01:56.685161 kubelet[3363]: W0117 00:01:56.685072 3363 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:01:56.685161 
kubelet[3363]: E0117 00:01:56.685091 3363 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same FlexVolume probe triplet repeats verbatim from 00:01:56.685 through 00:01:56.698; repetitions elided]
Jan 17 00:01:56.698229 containerd[1868]: time="2026-01-17T00:01:56.698165617Z" level=info msg="StartContainer for \"fb42fa9387c97a361b03b8310d9bae9fd54f94e1fb19471250dcc1e428f0d2da\" returns successfully" Jan 17 00:01:57.265561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb42fa9387c97a361b03b8310d9bae9fd54f94e1fb19471250dcc1e428f0d2da-rootfs.mount: Deactivated successfully.
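The repeated triplet above is the kubelet's FlexVolume probe loop: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it execs the driver binary with the argument "init" and unmarshals stdout as JSON. Here the nodeagent~uds/uds executable does not exist, so the call produces empty output and the JSON decode fails with "unexpected end of JSON input". As a minimal sketch of the calling convention a conforming driver would satisfy (the DriverStatus shape follows the FlexVolume spec; the stub itself is illustrative, not the real nodeagent driver):

// flexvol-stub.go - a minimal sketch of the FlexVolume calling convention
// that the kubelet's driver-call.go expects. A binary like this installed at
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds would
// satisfy the probe above; a missing executable yields empty stdout, which
// json.Unmarshal rejects as "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON shape the kubelet decodes after every call.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var out DriverStatus
	switch os.Args[1] {
	case "init":
		// "attach": false tells the kubelet this driver has no attach/detach phase.
		out = DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		// Unimplemented commands must still answer with valid JSON.
		out = DriverStatus{Status: "Not supported"}
	}
	b, _ := json.Marshal(out)
	fmt.Println(string(b))
}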
Jan 17 00:01:57.763211 containerd[1868]: time="2026-01-17T00:01:57.763144224Z" level=info msg="shim disconnected" id=fb42fa9387c97a361b03b8310d9bae9fd54f94e1fb19471250dcc1e428f0d2da namespace=k8s.io Jan 17 00:01:57.763663 containerd[1868]: time="2026-01-17T00:01:57.763644224Z" level=warning msg="cleaning up after shim disconnected" id=fb42fa9387c97a361b03b8310d9bae9fd54f94e1fb19471250dcc1e428f0d2da namespace=k8s.io Jan 17 00:01:57.763729 containerd[1868]: time="2026-01-17T00:01:57.763716944Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:01:58.503213 kubelet[3363]: E0117 00:01:58.502876 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:01:58.614567 containerd[1868]: time="2026-01-17T00:01:58.614300197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:01:59.220533 kubelet[3363]: I0117 00:01:59.220182 3363 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:02:00.502686 kubelet[3363]: E0117 00:02:00.502636 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:02:00.884929 containerd[1868]: time="2026-01-17T00:02:00.884268033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:00.887103 containerd[1868]: time="2026-01-17T00:02:00.887059392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 17 00:02:00.890582 containerd[1868]: time="2026-01-17T00:02:00.890543991Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:00.897107 containerd[1868]: time="2026-01-17T00:02:00.897068268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:00.897792 containerd[1868]: time="2026-01-17T00:02:00.897679508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.283240831s" Jan 17 00:02:00.897792 containerd[1868]: time="2026-01-17T00:02:00.897709508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 17 00:02:00.899950 containerd[1868]: time="2026-01-17T00:02:00.899813187Z" level=info msg="CreateContainer within sandbox \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
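The NetworkReady=false errors above and the later "no network config found in /etc/cni/net.d" reload failure are the same condition seen from two sides: containerd keeps the node's runtime network NotReady until a CNI config file appears in /etc/cni/net.d, which is what the install-cni container being created here will eventually write. A rough diagnostic sketch of that readiness condition (the directory is the containerd default; this is not containerd's actual implementation):

// cni-ready-check.go - a rough diagnostic sketch of why the node stays
// NetworkReady=false above: containerd watches /etc/cni/net.d and reports
// "cni plugin not initialized" until a CNI network config lands there.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d" // default containerd CNI config directory
	var matches []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		matches = append(matches, m...)
	}
	if len(matches) == 0 {
		fmt.Println("no network config found in", confDir, "- cni plugin not initialized")
		os.Exit(1)
	}
	for _, f := range matches {
		fmt.Println("found CNI config:", f)
	}
}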
time="2026-01-17T00:02:00.957676163Z" level=info msg="CreateContainer within sandbox \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e496b2086f9fe0a27a8a2dc7f249edf5d6f4dee3c4c714caec7048f94d35602f\"" Jan 17 00:02:00.958548 containerd[1868]: time="2026-01-17T00:02:00.958153643Z" level=info msg="StartContainer for \"e496b2086f9fe0a27a8a2dc7f249edf5d6f4dee3c4c714caec7048f94d35602f\"" Jan 17 00:02:01.016636 containerd[1868]: time="2026-01-17T00:02:01.016602980Z" level=info msg="StartContainer for \"e496b2086f9fe0a27a8a2dc7f249edf5d6f4dee3c4c714caec7048f94d35602f\" returns successfully" Jan 17 00:02:02.158445 containerd[1868]: time="2026-01-17T00:02:02.158300741Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:02:02.190122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e496b2086f9fe0a27a8a2dc7f249edf5d6f4dee3c4c714caec7048f94d35602f-rootfs.mount: Deactivated successfully. Jan 17 00:02:02.266333 kubelet[3363]: I0117 00:02:02.266305 3363 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:02:02.329982 kubelet[3363]: I0117 00:02:02.328922 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e6f1f60-5b4d-4f6a-92be-ef48b02574bd-calico-apiserver-certs\") pod \"calico-apiserver-7cb7f6dddc-5gx8p\" (UID: \"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd\") " pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" Jan 17 00:02:02.329982 kubelet[3363]: I0117 00:02:02.328957 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26750179-e403-4a5c-a534-2a8e795f3838-config-volume\") pod \"coredns-668d6bf9bc-lpdqp\" (UID: \"26750179-e403-4a5c-a534-2a8e795f3838\") " pod="kube-system/coredns-668d6bf9bc-lpdqp" Jan 17 00:02:02.329982 kubelet[3363]: I0117 00:02:02.328975 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2966s\" (UniqueName: \"kubernetes.io/projected/26750179-e403-4a5c-a534-2a8e795f3838-kube-api-access-2966s\") pod \"coredns-668d6bf9bc-lpdqp\" (UID: \"26750179-e403-4a5c-a534-2a8e795f3838\") " pod="kube-system/coredns-668d6bf9bc-lpdqp" Jan 17 00:02:02.329982 kubelet[3363]: I0117 00:02:02.328994 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2h2r\" (UniqueName: \"kubernetes.io/projected/d6662ed8-4409-4f39-bb3b-ba711a87545b-kube-api-access-c2h2r\") pod \"calico-kube-controllers-854949db7b-nkqdw\" (UID: \"d6662ed8-4409-4f39-bb3b-ba711a87545b\") " pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" Jan 17 00:02:02.329982 kubelet[3363]: I0117 00:02:02.329014 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdr7c\" (UniqueName: \"kubernetes.io/projected/7e6f1f60-5b4d-4f6a-92be-ef48b02574bd-kube-api-access-sdr7c\") pod \"calico-apiserver-7cb7f6dddc-5gx8p\" (UID: \"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd\") " pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" Jan 17 00:02:02.330595 kubelet[3363]: I0117 00:02:02.329034 3363 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-ca-bundle\") pod \"whisker-5c86fff86c-nvh7r\" (UID: \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\") " pod="calico-system/whisker-5c86fff86c-nvh7r" Jan 17 00:02:02.330595 kubelet[3363]: I0117 00:02:02.329052 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-backend-key-pair\") pod \"whisker-5c86fff86c-nvh7r\" (UID: \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\") " pod="calico-system/whisker-5c86fff86c-nvh7r" Jan 17 00:02:02.330595 kubelet[3363]: I0117 00:02:02.329068 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6662ed8-4409-4f39-bb3b-ba711a87545b-tigera-ca-bundle\") pod \"calico-kube-controllers-854949db7b-nkqdw\" (UID: \"d6662ed8-4409-4f39-bb3b-ba711a87545b\") " pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" Jan 17 00:02:02.330595 kubelet[3363]: I0117 00:02:02.329085 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdrxs\" (UniqueName: \"kubernetes.io/projected/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-kube-api-access-mdrxs\") pod \"whisker-5c86fff86c-nvh7r\" (UID: \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\") " pod="calico-system/whisker-5c86fff86c-nvh7r" Jan 17 00:02:03.010108 kubelet[3363]: I0117 00:02:02.429821 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mg8s\" (UniqueName: \"kubernetes.io/projected/6139bf28-5324-4c65-a1a9-809ea0e0b5cf-kube-api-access-2mg8s\") pod \"calico-apiserver-7cb7f6dddc-pk2n6\" (UID: \"6139bf28-5324-4c65-a1a9-809ea0e0b5cf\") " pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" Jan 17 00:02:03.010108 kubelet[3363]: I0117 00:02:02.429864 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fcd9ca75-edcd-4265-90e0-090e79b4eb07-config-volume\") pod \"coredns-668d6bf9bc-pvlhk\" (UID: \"fcd9ca75-edcd-4265-90e0-090e79b4eb07\") " pod="kube-system/coredns-668d6bf9bc-pvlhk" Jan 17 00:02:03.010108 kubelet[3363]: I0117 00:02:02.429899 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7edf5fdf-55d5-4ab4-bb24-67c10b2d9654-config\") pod \"goldmane-666569f655-gjjfd\" (UID: \"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654\") " pod="calico-system/goldmane-666569f655-gjjfd" Jan 17 00:02:03.010108 kubelet[3363]: I0117 00:02:02.429928 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph8gt\" (UniqueName: \"kubernetes.io/projected/7edf5fdf-55d5-4ab4-bb24-67c10b2d9654-kube-api-access-ph8gt\") pod \"goldmane-666569f655-gjjfd\" (UID: \"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654\") " pod="calico-system/goldmane-666569f655-gjjfd" Jan 17 00:02:03.010108 kubelet[3363]: I0117 00:02:02.429945 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6139bf28-5324-4c65-a1a9-809ea0e0b5cf-calico-apiserver-certs\") pod 
\"calico-apiserver-7cb7f6dddc-pk2n6\" (UID: \"6139bf28-5324-4c65-a1a9-809ea0e0b5cf\") " pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" Jan 17 00:02:03.010447 kubelet[3363]: I0117 00:02:02.429972 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edf5fdf-55d5-4ab4-bb24-67c10b2d9654-goldmane-ca-bundle\") pod \"goldmane-666569f655-gjjfd\" (UID: \"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654\") " pod="calico-system/goldmane-666569f655-gjjfd" Jan 17 00:02:03.010447 kubelet[3363]: I0117 00:02:02.430001 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7edf5fdf-55d5-4ab4-bb24-67c10b2d9654-goldmane-key-pair\") pod \"goldmane-666569f655-gjjfd\" (UID: \"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654\") " pod="calico-system/goldmane-666569f655-gjjfd" Jan 17 00:02:03.010447 kubelet[3363]: I0117 00:02:02.430051 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5zcl\" (UniqueName: \"kubernetes.io/projected/fcd9ca75-edcd-4265-90e0-090e79b4eb07-kube-api-access-x5zcl\") pod \"coredns-668d6bf9bc-pvlhk\" (UID: \"fcd9ca75-edcd-4265-90e0-090e79b4eb07\") " pod="kube-system/coredns-668d6bf9bc-pvlhk" Jan 17 00:02:03.010539 containerd[1868]: time="2026-01-17T00:02:03.010131648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zk6p,Uid:e5fabfab-a45e-49bd-b3b5-28097628ac44,Namespace:calico-system,Attempt:0,}" Jan 17 00:02:03.036382 containerd[1868]: time="2026-01-17T00:02:03.035755924Z" level=info msg="shim disconnected" id=e496b2086f9fe0a27a8a2dc7f249edf5d6f4dee3c4c714caec7048f94d35602f namespace=k8s.io Jan 17 00:02:03.036382 containerd[1868]: time="2026-01-17T00:02:03.035805844Z" level=warning msg="cleaning up after shim disconnected" id=e496b2086f9fe0a27a8a2dc7f249edf5d6f4dee3c4c714caec7048f94d35602f namespace=k8s.io Jan 17 00:02:03.036382 containerd[1868]: time="2026-01-17T00:02:03.035814204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:03.098570 containerd[1868]: time="2026-01-17T00:02:03.098396474Z" level=error msg="Failed to destroy network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.099107 containerd[1868]: time="2026-01-17T00:02:03.098836074Z" level=error msg="encountered an error cleaning up failed sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.099107 containerd[1868]: time="2026-01-17T00:02:03.098891114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zk6p,Uid:e5fabfab-a45e-49bd-b3b5-28097628ac44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 
00:02:03.100859 kubelet[3363]: E0117 00:02:03.099206 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.100859 kubelet[3363]: E0117 00:02:03.099286 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:02:03.100859 kubelet[3363]: E0117 00:02:03.099304 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zk6p" Jan 17 00:02:03.100960 kubelet[3363]: E0117 00:02:03.099347 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
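This first sandbox failure sets the pattern for every RunPodSandbox attempt that follows: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and refuses to set up (or tear down) any pod network until it exists. A small sketch of that precondition (illustrative, not Calico's actual code):

// nodename-check.go - a small sketch of the precondition the Calico CNI
// plugin is failing on above: it reads /var/lib/calico/nodename, written by
// the calico/node container on startup, before doing any pod networking.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/var/lib/calico/nodename"
	b, err := os.ReadFile(path)
	if err != nil {
		// The exact condition behind every sandbox failure in this log.
		fmt.Printf("stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", path, err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", strings.TrimSpace(string(b)))
}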
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pvlhk,Uid:fcd9ca75-edcd-4265-90e0-090e79b4eb07,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:03.265987 containerd[1868]: time="2026-01-17T00:02:03.265739448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-pk2n6,Uid:6139bf28-5324-4c65-a1a9-809ea0e0b5cf,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:02:03.337985 containerd[1868]: time="2026-01-17T00:02:03.337917837Z" level=error msg="Failed to destroy network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.338696 containerd[1868]: time="2026-01-17T00:02:03.338553197Z" level=error msg="encountered an error cleaning up failed sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.338696 containerd[1868]: time="2026-01-17T00:02:03.338610917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c86fff86c-nvh7r,Uid:ecd1bf71-b900-4076-a4ef-6e1fdb271e0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.340567 kubelet[3363]: E0117 00:02:03.340271 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.340567 kubelet[3363]: E0117 00:02:03.340324 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c86fff86c-nvh7r" Jan 17 00:02:03.340567 kubelet[3363]: E0117 00:02:03.340341 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c86fff86c-nvh7r" Jan 17 00:02:03.340908 kubelet[3363]: E0117 00:02:03.340376 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c86fff86c-nvh7r_calico-system(ecd1bf71-b900-4076-a4ef-6e1fdb271e0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5c86fff86c-nvh7r_calico-system(ecd1bf71-b900-4076-a4ef-6e1fdb271e0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c86fff86c-nvh7r" podUID="ecd1bf71-b900-4076-a4ef-6e1fdb271e0b" Jan 17 00:02:03.341474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c-shm.mount: Deactivated successfully. Jan 17 00:02:03.523012 containerd[1868]: time="2026-01-17T00:02:03.522821248Z" level=error msg="Failed to destroy network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.524906 containerd[1868]: time="2026-01-17T00:02:03.523374528Z" level=error msg="encountered an error cleaning up failed sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.524906 containerd[1868]: time="2026-01-17T00:02:03.523420608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-pk2n6,Uid:6139bf28-5324-4c65-a1a9-809ea0e0b5cf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.525042 kubelet[3363]: E0117 00:02:03.523609 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.525042 kubelet[3363]: E0117 00:02:03.523667 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" Jan 17 00:02:03.525042 kubelet[3363]: E0117 00:02:03.523687 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" Jan 17 
00:02:03.525139 kubelet[3363]: E0117 00:02:03.523724 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cb7f6dddc-pk2n6_calico-apiserver(6139bf28-5324-4c65-a1a9-809ea0e0b5cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cb7f6dddc-pk2n6_calico-apiserver(6139bf28-5324-4c65-a1a9-809ea0e0b5cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:03.530080 containerd[1868]: time="2026-01-17T00:02:03.529912167Z" level=error msg="Failed to destroy network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.531118 containerd[1868]: time="2026-01-17T00:02:03.531082247Z" level=error msg="encountered an error cleaning up failed sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.531214 containerd[1868]: time="2026-01-17T00:02:03.531133447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-5gx8p,Uid:7e6f1f60-5b4d-4f6a-92be-ef48b02574bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.533440 kubelet[3363]: E0117 00:02:03.531364 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.533440 kubelet[3363]: E0117 00:02:03.531413 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" Jan 17 00:02:03.533440 kubelet[3363]: E0117 00:02:03.531434 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" Jan 17 00:02:03.533592 kubelet[3363]: E0117 00:02:03.531467 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cb7f6dddc-5gx8p_calico-apiserver(7e6f1f60-5b4d-4f6a-92be-ef48b02574bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cb7f6dddc-5gx8p_calico-apiserver(7e6f1f60-5b4d-4f6a-92be-ef48b02574bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:03.554020 containerd[1868]: time="2026-01-17T00:02:03.553967763Z" level=error msg="Failed to destroy network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.555039 containerd[1868]: time="2026-01-17T00:02:03.554635203Z" level=error msg="encountered an error cleaning up failed sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.555113 containerd[1868]: time="2026-01-17T00:02:03.555064603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpdqp,Uid:26750179-e403-4a5c-a534-2a8e795f3838,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.555669 kubelet[3363]: E0117 00:02:03.555268 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.555669 kubelet[3363]: E0117 00:02:03.555322 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lpdqp" Jan 17 00:02:03.555669 kubelet[3363]: E0117 00:02:03.555339 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lpdqp" Jan 17 00:02:03.556318 kubelet[3363]: E0117 00:02:03.555380 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lpdqp_kube-system(26750179-e403-4a5c-a534-2a8e795f3838)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lpdqp_kube-system(26750179-e403-4a5c-a534-2a8e795f3838)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lpdqp" podUID="26750179-e403-4a5c-a534-2a8e795f3838" Jan 17 00:02:03.571896 containerd[1868]: time="2026-01-17T00:02:03.571832880Z" level=error msg="Failed to destroy network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.572790 containerd[1868]: time="2026-01-17T00:02:03.572753560Z" level=error msg="encountered an error cleaning up failed sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.572914 containerd[1868]: time="2026-01-17T00:02:03.572805880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjjfd,Uid:7edf5fdf-55d5-4ab4-bb24-67c10b2d9654,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.574730 containerd[1868]: time="2026-01-17T00:02:03.573268600Z" level=error msg="Failed to destroy network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.574730 containerd[1868]: time="2026-01-17T00:02:03.573646360Z" level=error msg="encountered an error cleaning up failed sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.574730 containerd[1868]: time="2026-01-17T00:02:03.573695040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854949db7b-nkqdw,Uid:d6662ed8-4409-4f39-bb3b-ba711a87545b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.574855 kubelet[3363]: E0117 00:02:03.573073 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.574855 kubelet[3363]: E0117 00:02:03.573127 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gjjfd" Jan 17 00:02:03.574855 kubelet[3363]: E0117 00:02:03.573151 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-gjjfd" Jan 17 00:02:03.574983 kubelet[3363]: E0117 00:02:03.573230 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-gjjfd_calico-system(7edf5fdf-55d5-4ab4-bb24-67c10b2d9654)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-gjjfd_calico-system(7edf5fdf-55d5-4ab4-bb24-67c10b2d9654)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:03.575275 kubelet[3363]: E0117 00:02:03.575109 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.575275 kubelet[3363]: E0117 00:02:03.575146 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" Jan 17 00:02:03.575275 kubelet[3363]: E0117 00:02:03.575162 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" Jan 17 00:02:03.575468 kubelet[3363]: E0117 00:02:03.575236 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-854949db7b-nkqdw_calico-system(d6662ed8-4409-4f39-bb3b-ba711a87545b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-854949db7b-nkqdw_calico-system(d6662ed8-4409-4f39-bb3b-ba711a87545b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:03.584687 containerd[1868]: time="2026-01-17T00:02:03.584593598Z" level=error msg="Failed to destroy network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.584971 containerd[1868]: time="2026-01-17T00:02:03.584945238Z" level=error msg="encountered an error cleaning up failed sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.585127 containerd[1868]: time="2026-01-17T00:02:03.585047718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvlhk,Uid:fcd9ca75-edcd-4265-90e0-090e79b4eb07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.585365 kubelet[3363]: E0117 00:02:03.585265 3363 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.585365 kubelet[3363]: E0117 00:02:03.585300 3363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pvlhk" Jan 
17 00:02:03.585365 kubelet[3363]: E0117 00:02:03.585319 3363 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pvlhk" Jan 17 00:02:03.585525 kubelet[3363]: E0117 00:02:03.585500 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pvlhk_kube-system(fcd9ca75-edcd-4265-90e0-090e79b4eb07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pvlhk_kube-system(fcd9ca75-edcd-4265-90e0-090e79b4eb07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pvlhk" podUID="fcd9ca75-edcd-4265-90e0-090e79b4eb07" Jan 17 00:02:03.625221 kubelet[3363]: I0117 00:02:03.624006 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:03.625960 containerd[1868]: time="2026-01-17T00:02:03.625540352Z" level=info msg="StopPodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\"" Jan 17 00:02:03.625960 containerd[1868]: time="2026-01-17T00:02:03.625725952Z" level=info msg="Ensure that sandbox d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a in task-service has been cleanup successfully" Jan 17 00:02:03.631928 containerd[1868]: time="2026-01-17T00:02:03.631898911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:02:03.635588 kubelet[3363]: I0117 00:02:03.635557 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:03.648448 containerd[1868]: time="2026-01-17T00:02:03.648400948Z" level=info msg="StopPodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\"" Jan 17 00:02:03.648694 containerd[1868]: time="2026-01-17T00:02:03.648670508Z" level=info msg="Ensure that sandbox 681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c in task-service has been cleanup successfully" Jan 17 00:02:03.649697 kubelet[3363]: I0117 00:02:03.649670 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:03.650185 containerd[1868]: time="2026-01-17T00:02:03.650150708Z" level=info msg="StopPodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\"" Jan 17 00:02:03.650323 containerd[1868]: time="2026-01-17T00:02:03.650303548Z" level=info msg="Ensure that sandbox e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476 in task-service has been cleanup successfully" Jan 17 00:02:03.652255 kubelet[3363]: I0117 00:02:03.652229 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:03.655024 
containerd[1868]: time="2026-01-17T00:02:03.654611507Z" level=info msg="StopPodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\"" Jan 17 00:02:03.656073 kubelet[3363]: I0117 00:02:03.655555 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:03.659325 containerd[1868]: time="2026-01-17T00:02:03.659297707Z" level=info msg="Ensure that sandbox 3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a in task-service has been cleanup successfully" Jan 17 00:02:03.662372 containerd[1868]: time="2026-01-17T00:02:03.662339066Z" level=info msg="StopPodSandbox for \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\"" Jan 17 00:02:03.663982 containerd[1868]: time="2026-01-17T00:02:03.663949866Z" level=info msg="Ensure that sandbox ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3 in task-service has been cleanup successfully" Jan 17 00:02:03.677720 kubelet[3363]: I0117 00:02:03.677214 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:03.679443 containerd[1868]: time="2026-01-17T00:02:03.679322464Z" level=info msg="StopPodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\"" Jan 17 00:02:03.680009 containerd[1868]: time="2026-01-17T00:02:03.679974303Z" level=info msg="Ensure that sandbox a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d in task-service has been cleanup successfully" Jan 17 00:02:03.681778 kubelet[3363]: I0117 00:02:03.681741 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:03.683059 containerd[1868]: time="2026-01-17T00:02:03.683026463Z" level=info msg="StopPodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\"" Jan 17 00:02:03.684165 containerd[1868]: time="2026-01-17T00:02:03.684028543Z" level=info msg="Ensure that sandbox 00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e in task-service has been cleanup successfully" Jan 17 00:02:03.687372 containerd[1868]: time="2026-01-17T00:02:03.686965062Z" level=error msg="StopPodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" failed" error="failed to destroy network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.687447 kubelet[3363]: E0117 00:02:03.687136 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:03.687447 kubelet[3363]: I0117 00:02:03.687214 3363 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:03.687585 kubelet[3363]: E0117 00:02:03.687538 
3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a"} Jan 17 00:02:03.687813 kubelet[3363]: E0117 00:02:03.687666 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26750179-e403-4a5c-a534-2a8e795f3838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.687813 kubelet[3363]: E0117 00:02:03.687691 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26750179-e403-4a5c-a534-2a8e795f3838\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lpdqp" podUID="26750179-e403-4a5c-a534-2a8e795f3838" Jan 17 00:02:03.689137 containerd[1868]: time="2026-01-17T00:02:03.688979142Z" level=info msg="StopPodSandbox for \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\"" Jan 17 00:02:03.689262 containerd[1868]: time="2026-01-17T00:02:03.689140662Z" level=info msg="Ensure that sandbox fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0 in task-service has been cleanup successfully" Jan 17 00:02:03.730003 containerd[1868]: time="2026-01-17T00:02:03.729788896Z" level=error msg="StopPodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" failed" error="failed to destroy network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.731240 kubelet[3363]: E0117 00:02:03.730024 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:03.731240 kubelet[3363]: E0117 00:02:03.730080 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a"} Jan 17 00:02:03.731240 kubelet[3363]: E0117 00:02:03.730113 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6139bf28-5324-4c65-a1a9-809ea0e0b5cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jan 17 00:02:03.731240 kubelet[3363]: E0117 00:02:03.730135 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6139bf28-5324-4c65-a1a9-809ea0e0b5cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:03.740233 containerd[1868]: time="2026-01-17T00:02:03.740132534Z" level=error msg="StopPodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" failed" error="failed to destroy network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.741055 kubelet[3363]: E0117 00:02:03.741018 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:03.741144 kubelet[3363]: E0117 00:02:03.741063 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476"} Jan 17 00:02:03.741144 kubelet[3363]: E0117 00:02:03.741096 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5fabfab-a45e-49bd-b3b5-28097628ac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.741144 kubelet[3363]: E0117 00:02:03.741118 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5fabfab-a45e-49bd-b3b5-28097628ac44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:02:03.741347 containerd[1868]: time="2026-01-17T00:02:03.740922734Z" level=error msg="StopPodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" failed" error="failed to destroy network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.744432 kubelet[3363]: E0117 00:02:03.744394 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:03.744432 kubelet[3363]: E0117 00:02:03.744434 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e"} Jan 17 00:02:03.744541 kubelet[3363]: E0117 00:02:03.744459 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6662ed8-4409-4f39-bb3b-ba711a87545b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.744541 kubelet[3363]: E0117 00:02:03.744478 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6662ed8-4409-4f39-bb3b-ba711a87545b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:03.758083 containerd[1868]: time="2026-01-17T00:02:03.757967571Z" level=error msg="StopPodSandbox for \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" failed" error="failed to destroy network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.758331 kubelet[3363]: E0117 00:02:03.758196 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:03.758331 kubelet[3363]: E0117 00:02:03.758242 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3"} Jan 17 00:02:03.758331 kubelet[3363]: E0117 00:02:03.758279 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcd9ca75-edcd-4265-90e0-090e79b4eb07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.758331 kubelet[3363]: E0117 00:02:03.758300 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcd9ca75-edcd-4265-90e0-090e79b4eb07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pvlhk" podUID="fcd9ca75-edcd-4265-90e0-090e79b4eb07" Jan 17 00:02:03.763862 containerd[1868]: time="2026-01-17T00:02:03.762709491Z" level=error msg="StopPodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" failed" error="failed to destroy network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.763963 kubelet[3363]: E0117 00:02:03.763474 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:03.763963 kubelet[3363]: E0117 00:02:03.763536 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c"} Jan 17 00:02:03.763963 kubelet[3363]: E0117 00:02:03.763569 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.763963 kubelet[3363]: E0117 00:02:03.763588 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c86fff86c-nvh7r" podUID="ecd1bf71-b900-4076-a4ef-6e1fdb271e0b" Jan 17 00:02:03.766708 containerd[1868]: time="2026-01-17T00:02:03.766661290Z" level=error msg="StopPodSandbox for 
\"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" failed" error="failed to destroy network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.766872 kubelet[3363]: E0117 00:02:03.766843 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:03.766921 kubelet[3363]: E0117 00:02:03.766881 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0"} Jan 17 00:02:03.766921 kubelet[3363]: E0117 00:02:03.766907 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.766998 kubelet[3363]: E0117 00:02:03.766928 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:03.771389 containerd[1868]: time="2026-01-17T00:02:03.771285329Z" level=error msg="StopPodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" failed" error="failed to destroy network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:02:03.771512 kubelet[3363]: E0117 00:02:03.771472 3363 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:03.771512 kubelet[3363]: E0117 00:02:03.771507 3363 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d"} Jan 17 00:02:03.771594 kubelet[3363]: E0117 00:02:03.771535 3363 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:02:03.771594 kubelet[3363]: E0117 00:02:03.771552 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:04.190308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0-shm.mount: Deactivated successfully. Jan 17 00:02:07.681503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223488468.mount: Deactivated successfully. Jan 17 00:02:08.612549 systemd-resolved[1710]: Under memory pressure, flushing caches. Jan 17 00:02:08.617388 systemd-journald[1304]: Under memory pressure, flushing caches. Jan 17 00:02:08.612604 systemd-resolved[1710]: Flushed all caches. 
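Every sandbox failure above reports the same root cause: the file /var/lib/calico/nodename does not exist yet. The Calico CNI plugin resolves the local node's name from that file, which the calico/node container writes once it is running with /var/lib/calico/ mounted; the entries that follow show containerd pulling and starting exactly that container, after which the errors stop. The sketch below illustrates the shape of the failing check only — it is inferred from the error text quoted in the entries, not taken from Calico's source, and the function names are invented:

    // nodename_check.go — illustrative only; mirrors the stat/read that each
    // RunPodSandbox/StopPodSandbox attempt above is failing on.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // nodenameFile is the path quoted in every error entry above.
    const nodenameFile = "/var/lib/calico/nodename"

    func nodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		// The condition this node is in until calico/node starts below.
    		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := nodename()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("node:", name) // e.g. ci-4081.3.6-n-e1db9b2d97
    }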
Jan 17 00:02:08.754489 containerd[1868]: time="2026-01-17T00:02:08.754441949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:08.758434 containerd[1868]: time="2026-01-17T00:02:08.758376908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:02:08.763702 containerd[1868]: time="2026-01-17T00:02:08.763659988Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:08.769418 containerd[1868]: time="2026-01-17T00:02:08.769374347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:08.770022 containerd[1868]: time="2026-01-17T00:02:08.769863667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.137929476s" Jan 17 00:02:08.770022 containerd[1868]: time="2026-01-17T00:02:08.769895387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:02:08.784941 containerd[1868]: time="2026-01-17T00:02:08.784826784Z" level=info msg="CreateContainer within sandbox \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:02:08.825654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331724680.mount: Deactivated successfully. Jan 17 00:02:08.846455 containerd[1868]: time="2026-01-17T00:02:08.846348454Z" level=info msg="CreateContainer within sandbox \"29504e96d9050dfb7f2d61e190672d6503603b05312d31e9631892a13e602df5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8f49fa13d107342d447ed10e9dd9112cef6e960cba5e6ad203e5130d48433a71\"" Jan 17 00:02:08.846890 containerd[1868]: time="2026-01-17T00:02:08.846871654Z" level=info msg="StartContainer for \"8f49fa13d107342d447ed10e9dd9112cef6e960cba5e6ad203e5130d48433a71\"" Jan 17 00:02:08.902399 containerd[1868]: time="2026-01-17T00:02:08.902069605Z" level=info msg="StartContainer for \"8f49fa13d107342d447ed10e9dd9112cef6e960cba5e6ad203e5130d48433a71\" returns successfully" Jan 17 00:02:09.227788 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:02:09.227968 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:02:09.339560 containerd[1868]: time="2026-01-17T00:02:09.339514332Z" level=info msg="StopPodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\"" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.446 [INFO][4594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.446 [INFO][4594] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" iface="eth0" netns="/var/run/netns/cni-cf3f29d4-a79a-c792-9573-ee1bacf6627a" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.447 [INFO][4594] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" iface="eth0" netns="/var/run/netns/cni-cf3f29d4-a79a-c792-9573-ee1bacf6627a" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.448 [INFO][4594] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" iface="eth0" netns="/var/run/netns/cni-cf3f29d4-a79a-c792-9573-ee1bacf6627a" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.448 [INFO][4594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.448 [INFO][4594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.481 [INFO][4609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.482 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.482 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.490 [WARNING][4609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.490 [INFO][4609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.495 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:09.501241 containerd[1868]: 2026-01-17 00:02:09.498 [INFO][4594] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:09.502670 containerd[1868]: time="2026-01-17T00:02:09.501817746Z" level=info msg="TearDown network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" successfully" Jan 17 00:02:09.502670 containerd[1868]: time="2026-01-17T00:02:09.501863946Z" level=info msg="StopPodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" returns successfully" Jan 17 00:02:09.579839 kubelet[3363]: I0117 00:02:09.579809 3363 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdrxs\" (UniqueName: \"kubernetes.io/projected/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-kube-api-access-mdrxs\") pod \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\" (UID: \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\") " Jan 17 00:02:09.580513 kubelet[3363]: I0117 00:02:09.579853 3363 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-ca-bundle\") pod \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\" (UID: \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\") " Jan 17 00:02:09.580513 kubelet[3363]: I0117 00:02:09.579886 3363 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-backend-key-pair\") pod \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\" (UID: \"ecd1bf71-b900-4076-a4ef-6e1fdb271e0b\") " Jan 17 00:02:09.582212 kubelet[3363]: I0117 00:02:09.581966 3363 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ecd1bf71-b900-4076-a4ef-6e1fdb271e0b" (UID: "ecd1bf71-b900-4076-a4ef-6e1fdb271e0b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:02:09.582557 kubelet[3363]: I0117 00:02:09.582534 3363 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-kube-api-access-mdrxs" (OuterVolumeSpecName: "kube-api-access-mdrxs") pod "ecd1bf71-b900-4076-a4ef-6e1fdb271e0b" (UID: "ecd1bf71-b900-4076-a4ef-6e1fdb271e0b"). InnerVolumeSpecName "kube-api-access-mdrxs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:02:09.583840 kubelet[3363]: I0117 00:02:09.583812 3363 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ecd1bf71-b900-4076-a4ef-6e1fdb271e0b" (UID: "ecd1bf71-b900-4076-a4ef-6e1fdb271e0b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:02:09.680906 kubelet[3363]: I0117 00:02:09.680807 3363 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mdrxs\" (UniqueName: \"kubernetes.io/projected/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-kube-api-access-mdrxs\") on node \"ci-4081.3.6-n-e1db9b2d97\" DevicePath \"\"" Jan 17 00:02:09.680906 kubelet[3363]: I0117 00:02:09.680837 3363 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-ca-bundle\") on node \"ci-4081.3.6-n-e1db9b2d97\" DevicePath \"\"" Jan 17 00:02:09.680906 kubelet[3363]: I0117 00:02:09.680848 3363 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-e1db9b2d97\" DevicePath \"\"" Jan 17 00:02:09.777451 systemd[1]: run-netns-cni\x2dcf3f29d4\x2da79a\x2dc792\x2d9573\x2dee1bacf6627a.mount: Deactivated successfully. Jan 17 00:02:09.777582 systemd[1]: var-lib-kubelet-pods-ecd1bf71\x2db900\x2d4076\x2da4ef\x2d6e1fdb271e0b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmdrxs.mount: Deactivated successfully. Jan 17 00:02:09.777673 systemd[1]: var-lib-kubelet-pods-ecd1bf71\x2db900\x2d4076\x2da4ef\x2d6e1fdb271e0b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:02:09.783190 kubelet[3363]: I0117 00:02:09.781900 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d6jzl" podStartSLOduration=1.629839589 podStartE2EDuration="16.781881259s" podCreationTimestamp="2026-01-17 00:01:53 +0000 UTC" firstStartedPulling="2026-01-17 00:01:53.618768276 +0000 UTC m=+24.224073510" lastFinishedPulling="2026-01-17 00:02:08.770809946 +0000 UTC m=+39.376115180" observedRunningTime="2026-01-17 00:02:09.780768019 +0000 UTC m=+40.386073293" watchObservedRunningTime="2026-01-17 00:02:09.781881259 +0000 UTC m=+40.387186493" Jan 17 00:02:09.882275 kubelet[3363]: I0117 00:02:09.882242 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8e344f5-3821-4ec4-a6da-be956667501d-whisker-ca-bundle\") pod \"whisker-8f9bbb7c5-bvnkf\" (UID: \"a8e344f5-3821-4ec4-a6da-be956667501d\") " pod="calico-system/whisker-8f9bbb7c5-bvnkf" Jan 17 00:02:09.882483 kubelet[3363]: I0117 00:02:09.882440 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdtr\" (UniqueName: \"kubernetes.io/projected/a8e344f5-3821-4ec4-a6da-be956667501d-kube-api-access-skdtr\") pod \"whisker-8f9bbb7c5-bvnkf\" (UID: \"a8e344f5-3821-4ec4-a6da-be956667501d\") " pod="calico-system/whisker-8f9bbb7c5-bvnkf" Jan 17 00:02:09.882483 kubelet[3363]: I0117 00:02:09.882470 3363 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a8e344f5-3821-4ec4-a6da-be956667501d-whisker-backend-key-pair\") pod \"whisker-8f9bbb7c5-bvnkf\" (UID: \"a8e344f5-3821-4ec4-a6da-be956667501d\") " pod="calico-system/whisker-8f9bbb7c5-bvnkf" Jan 17 00:02:10.112304 containerd[1868]: time="2026-01-17T00:02:10.112261245Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-8f9bbb7c5-bvnkf,Uid:a8e344f5-3821-4ec4-a6da-be956667501d,Namespace:calico-system,Attempt:0,}" Jan 17 00:02:10.296388 systemd-networkd[1412]: cali121e89cd378: Link UP Jan 17 00:02:10.296525 systemd-networkd[1412]: cali121e89cd378: Gained carrier Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.175 [INFO][4652] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.197 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0 whisker-8f9bbb7c5- calico-system a8e344f5-3821-4ec4-a6da-be956667501d 875 0 2026-01-17 00:02:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8f9bbb7c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 whisker-8f9bbb7c5-bvnkf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali121e89cd378 [] [] }} ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.197 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.219 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" HandleID="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.219 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" HandleID="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"whisker-8f9bbb7c5-bvnkf", "timestamp":"2026-01-17 00:02:10.219681067 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.219 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.219 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.219 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.228 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.231 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.234 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.236 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.238 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.238 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.239 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830 Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.243 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.251 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.193/26] block=192.168.47.192/26 handle="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.251 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.193/26] handle="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.252 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
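Between "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock" above, ipam.go confirms this host's affinity for the block 192.168.47.192/26 and claims the lowest free address in it, 192.168.47.193, recorded under a per-allocation handle. The toy model below captures that block-affinity idea only; it is not Calico's implementation, and the type and field names are invented:

    // ipam_block_sketch.go — toy model of claiming an address from a
    // host-affine /26 block, as traced in the ipam.go entries above.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    type block struct {
    	cidr      netip.Prefix          // host-affine block, e.g. 192.168.47.192/26
    	allocated map[netip.Addr]string // address -> handle ID
    }

    // claim hands out the lowest unallocated address and records the handle.
    func (b *block) claim(handle string) (netip.Addr, error) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, taken := b.allocated[a]; !taken {
    			b.allocated[a] = handle
    			return a, nil
    		}
    	}
    	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
    }

    func main() {
    	b := &block{
    		cidr: netip.MustParsePrefix("192.168.47.192/26"),
    		// .192 was evidently already in use on this node: the claim
    		// logged above yields .193.
    		allocated: map[netip.Addr]string{netip.MustParseAddr("192.168.47.192"): "in-use"},
    	}
    	ip, _ := b.claim("k8s-pod-network.c4ef80f0...") // handle shortened here
    	fmt.Println("claimed:", ip)                     // 192.168.47.193
    }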
Jan 17 00:02:10.314906 containerd[1868]: 2026-01-17 00:02:10.252 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.193/26] IPv6=[] ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" HandleID="k8s-pod-network.c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.315521 containerd[1868]: 2026-01-17 00:02:10.254 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0", GenerateName:"whisker-8f9bbb7c5-", Namespace:"calico-system", SelfLink:"", UID:"a8e344f5-3821-4ec4-a6da-be956667501d", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f9bbb7c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"whisker-8f9bbb7c5-bvnkf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali121e89cd378", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:10.315521 containerd[1868]: 2026-01-17 00:02:10.254 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.193/32] ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.315521 containerd[1868]: 2026-01-17 00:02:10.254 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali121e89cd378 ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.315521 containerd[1868]: 2026-01-17 00:02:10.294 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.315521 containerd[1868]: 2026-01-17 00:02:10.297 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system"
Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0", GenerateName:"whisker-8f9bbb7c5-", Namespace:"calico-system", SelfLink:"", UID:"a8e344f5-3821-4ec4-a6da-be956667501d", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8f9bbb7c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830", Pod:"whisker-8f9bbb7c5-bvnkf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali121e89cd378", MAC:"d2:e3:8c:8a:99:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:10.315521 containerd[1868]: 2026-01-17 00:02:10.312 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830" Namespace="calico-system" Pod="whisker-8f9bbb7c5-bvnkf" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--8f9bbb7c5--bvnkf-eth0" Jan 17 00:02:10.350031 containerd[1868]: time="2026-01-17T00:02:10.349570165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:10.350031 containerd[1868]: time="2026-01-17T00:02:10.349995605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:10.350322 containerd[1868]: time="2026-01-17T00:02:10.350019965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:10.350478 containerd[1868]: time="2026-01-17T00:02:10.350360045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:10.388217 containerd[1868]: time="2026-01-17T00:02:10.388183119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f9bbb7c5-bvnkf,Uid:a8e344f5-3821-4ec4-a6da-be956667501d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4ef80f075618a5f3e25059bb3ff788245f140dda043e05fd9c738e5c523a830\"" Jan 17 00:02:10.389487 containerd[1868]: time="2026-01-17T00:02:10.389460399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:02:10.665664 containerd[1868]: time="2026-01-17T00:02:10.664194873Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:10.670022 containerd[1868]: time="2026-01-17T00:02:10.669920832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:02:10.670022 containerd[1868]: time="2026-01-17T00:02:10.669988512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:02:10.670144 kubelet[3363]: E0117 00:02:10.670107 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:02:10.670441 kubelet[3363]: E0117 00:02:10.670154 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:02:10.670472 kubelet[3363]: E0117 00:02:10.670291 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2f3b6028e4fe4b7a9477513ac0bfee1b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:10.672235 containerd[1868]: time="2026-01-17T00:02:10.672201792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:02:10.967681 containerd[1868]: time="2026-01-17T00:02:10.967166703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:10.970191 kernel: bpftool[4844]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:02:11.059578 containerd[1868]: time="2026-01-17T00:02:11.059530328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:11.059731 containerd[1868]: time="2026-01-17T00:02:11.059687968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:02:11.060488 kubelet[3363]: E0117 00:02:11.060084 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:02:11.060488 kubelet[3363]: E0117 00:02:11.060137 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:02:11.060488 kubelet[3363]: E0117 00:02:11.060270 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:11.061963 kubelet[3363]: E0117 00:02:11.061766 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:02:11.176549 systemd-networkd[1412]: vxlan.calico: Link UP Jan 17 00:02:11.176555 systemd-networkd[1412]: vxlan.calico: Gained carrier Jan 17 00:02:11.506919 kubelet[3363]: 
I0117 00:02:11.506744 3363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd1bf71-b900-4076-a4ef-6e1fdb271e0b" path="/var/lib/kubelet/pods/ecd1bf71-b900-4076-a4ef-6e1fdb271e0b/volumes" Jan 17 00:02:11.684408 systemd-networkd[1412]: cali121e89cd378: Gained IPv6LL Jan 17 00:02:11.720029 kubelet[3363]: E0117 00:02:11.719987 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:02:12.580842 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL Jan 17 00:02:15.504010 containerd[1868]: time="2026-01-17T00:02:15.503976451Z" level=info msg="StopPodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\"" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.545 [INFO][4944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.547 [INFO][4944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" iface="eth0" netns="/var/run/netns/cni-5d96a7ee-b60e-24ce-1e2b-3c533de64745" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.548 [INFO][4944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" iface="eth0" netns="/var/run/netns/cni-5d96a7ee-b60e-24ce-1e2b-3c533de64745" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.548 [INFO][4944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" iface="eth0" netns="/var/run/netns/cni-5d96a7ee-b60e-24ce-1e2b-3c533de64745" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.548 [INFO][4944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.548 [INFO][4944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.565 [INFO][4952] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.565 [INFO][4952] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.565 [INFO][4952] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.573 [WARNING][4952] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.573 [INFO][4952] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.574 [INFO][4952] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:15.577825 containerd[1868]: 2026-01-17 00:02:15.576 [INFO][4944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:15.578500 containerd[1868]: time="2026-01-17T00:02:15.577935710Z" level=info msg="TearDown network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" successfully" Jan 17 00:02:15.578500 containerd[1868]: time="2026-01-17T00:02:15.577962230Z" level=info msg="StopPodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" returns successfully" Jan 17 00:02:15.580225 containerd[1868]: time="2026-01-17T00:02:15.579000550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjjfd,Uid:7edf5fdf-55d5-4ab4-bb24-67c10b2d9654,Namespace:calico-system,Attempt:1,}" Jan 17 00:02:15.581826 systemd[1]: run-netns-cni\x2d5d96a7ee\x2db60e\x2d24ce\x2d1e2b\x2d3c533de64745.mount: Deactivated successfully. 
Jan 17 00:02:15.722099 systemd-networkd[1412]: cali72035ebe7de: Link UP Jan 17 00:02:15.722286 systemd-networkd[1412]: cali72035ebe7de: Gained carrier Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.656 [INFO][4958] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0 goldmane-666569f655- calico-system 7edf5fdf-55d5-4ab4-bb24-67c10b2d9654 908 0 2026-01-17 00:01:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 goldmane-666569f655-gjjfd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali72035ebe7de [] [] }} ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.656 [INFO][4958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.680 [INFO][4970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" HandleID="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.680 [INFO][4970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" HandleID="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"goldmane-666569f655-gjjfd", "timestamp":"2026-01-17 00:02:15.680277441 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.680 [INFO][4970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.680 [INFO][4970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.680 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.688 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.692 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.695 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.697 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.698 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.698 [INFO][4970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.701 [INFO][4970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.708 [INFO][4970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.713 [INFO][4970] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.194/26] block=192.168.47.192/26 handle="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.713 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.194/26] handle="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.713 [INFO][4970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
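
Note the arithmetic behind the addresses: the node's affine block 192.168.47.192/26 spans 192.168.47.192 through .255 (64 addresses), and the IPAM steps above hand out the next free ones in order, .193 earlier for the whisker sandbox and now .194 for goldmane. A quick stdlib check, purely for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.47.192/26")
        if err != nil {
            panic(err)
        }
        for _, ip := range []string{"192.168.47.193", "192.168.47.194"} {
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
        }
    }
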
Jan 17 00:02:15.739482 containerd[1868]: 2026-01-17 00:02:15.714 [INFO][4970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.194/26] IPv6=[] ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" HandleID="k8s-pod-network.5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.740035 containerd[1868]: 2026-01-17 00:02:15.716 [INFO][4958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"goldmane-666569f655-gjjfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali72035ebe7de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:15.740035 containerd[1868]: 2026-01-17 00:02:15.716 [INFO][4958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.194/32] ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.740035 containerd[1868]: 2026-01-17 00:02:15.716 [INFO][4958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72035ebe7de ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.740035 containerd[1868]: 2026-01-17 00:02:15.722 [INFO][4958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.740035 containerd[1868]: 2026-01-17 00:02:15.723 [INFO][4958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" 
Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e", Pod:"goldmane-666569f655-gjjfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali72035ebe7de", MAC:"62:aa:97:dd:19:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:15.740035 containerd[1868]: 2026-01-17 00:02:15.736 [INFO][4958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e" Namespace="calico-system" Pod="goldmane-666569f655-gjjfd" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:15.760887 containerd[1868]: time="2026-01-17T00:02:15.760735979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:15.760887 containerd[1868]: time="2026-01-17T00:02:15.760790059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:15.760887 containerd[1868]: time="2026-01-17T00:02:15.760805179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:15.761657 containerd[1868]: time="2026-01-17T00:02:15.761562458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:15.802817 containerd[1868]: time="2026-01-17T00:02:15.802771047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-gjjfd,Uid:7edf5fdf-55d5-4ab4-bb24-67c10b2d9654,Namespace:calico-system,Attempt:1,} returns sandbox id \"5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e\"" Jan 17 00:02:15.805295 containerd[1868]: time="2026-01-17T00:02:15.804144607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:02:16.064949 containerd[1868]: time="2026-01-17T00:02:16.064906533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:16.070113 containerd[1868]: time="2026-01-17T00:02:16.070075292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:02:16.070204 containerd[1868]: time="2026-01-17T00:02:16.070184892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:16.070529 kubelet[3363]: E0117 00:02:16.070313 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:16.070529 kubelet[3363]: E0117 00:02:16.070358 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:16.076528 kubelet[3363]: E0117 00:02:16.076465 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph8gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjjfd_calico-system(7edf5fdf-55d5-4ab4-bb24-67c10b2d9654): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:16.077686 kubelet[3363]: E0117 00:02:16.077643 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:16.504441 containerd[1868]: 
time="2026-01-17T00:02:16.503411050Z" level=info msg="StopPodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\"" Jan 17 00:02:16.504441 containerd[1868]: time="2026-01-17T00:02:16.503411170Z" level=info msg="StopPodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\"" Jan 17 00:02:16.506325 containerd[1868]: time="2026-01-17T00:02:16.503427690Z" level=info msg="StopPodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\"" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.578 [INFO][5058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.578 [INFO][5058] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" iface="eth0" netns="/var/run/netns/cni-1c1b279b-59a1-d3de-09d4-30c42fb0ee9f" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.578 [INFO][5058] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" iface="eth0" netns="/var/run/netns/cni-1c1b279b-59a1-d3de-09d4-30c42fb0ee9f" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.579 [INFO][5058] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" iface="eth0" netns="/var/run/netns/cni-1c1b279b-59a1-d3de-09d4-30c42fb0ee9f" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.579 [INFO][5058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.579 [INFO][5058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.612 [INFO][5081] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.613 [INFO][5081] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.613 [INFO][5081] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.623 [WARNING][5081] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.623 [INFO][5081] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.626 [INFO][5081] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:16.632438 containerd[1868]: 2026-01-17 00:02:16.629 [INFO][5058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:16.634341 containerd[1868]: time="2026-01-17T00:02:16.633326054Z" level=info msg="TearDown network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" successfully" Jan 17 00:02:16.634341 containerd[1868]: time="2026-01-17T00:02:16.633396254Z" level=info msg="StopPodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" returns successfully" Jan 17 00:02:16.635884 systemd[1]: run-netns-cni\x2d1c1b279b\x2d59a1\x2dd3de\x2d09d4\x2d30c42fb0ee9f.mount: Deactivated successfully. Jan 17 00:02:16.637400 containerd[1868]: time="2026-01-17T00:02:16.636303773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zk6p,Uid:e5fabfab-a45e-49bd-b3b5-28097628ac44,Namespace:calico-system,Attempt:1,}" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.601 [INFO][5057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.601 [INFO][5057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" iface="eth0" netns="/var/run/netns/cni-912ca4f4-5f7c-43f5-82f0-339106d7751c" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.602 [INFO][5057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" iface="eth0" netns="/var/run/netns/cni-912ca4f4-5f7c-43f5-82f0-339106d7751c" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.602 [INFO][5057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" iface="eth0" netns="/var/run/netns/cni-912ca4f4-5f7c-43f5-82f0-339106d7751c" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.603 [INFO][5057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.603 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.642 [INFO][5087] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.642 [INFO][5087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.642 [INFO][5087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.650 [WARNING][5087] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.650 [INFO][5087] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.651 [INFO][5087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:16.656882 containerd[1868]: 2026-01-17 00:02:16.653 [INFO][5057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:16.661436 containerd[1868]: time="2026-01-17T00:02:16.657004527Z" level=info msg="TearDown network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" successfully" Jan 17 00:02:16.661436 containerd[1868]: time="2026-01-17T00:02:16.657027327Z" level=info msg="StopPodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" returns successfully" Jan 17 00:02:16.659732 systemd[1]: run-netns-cni\x2d912ca4f4\x2d5f7c\x2d43f5\x2d82f0\x2d339106d7751c.mount: Deactivated successfully. Jan 17 00:02:16.663282 containerd[1868]: time="2026-01-17T00:02:16.663208125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-pk2n6,Uid:6139bf28-5324-4c65-a1a9-809ea0e0b5cf,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.607 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.607 [INFO][5056] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" iface="eth0" netns="/var/run/netns/cni-6cd71592-8e75-8d67-f34b-380d6e47ac85" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.607 [INFO][5056] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" iface="eth0" netns="/var/run/netns/cni-6cd71592-8e75-8d67-f34b-380d6e47ac85" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.607 [INFO][5056] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" iface="eth0" netns="/var/run/netns/cni-6cd71592-8e75-8d67-f34b-380d6e47ac85" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.607 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.607 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.645 [INFO][5089] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.645 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.651 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.664 [WARNING][5089] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.664 [INFO][5089] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.665 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:16.669490 containerd[1868]: 2026-01-17 00:02:16.667 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:16.670098 containerd[1868]: time="2026-01-17T00:02:16.669664443Z" level=info msg="TearDown network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" successfully" Jan 17 00:02:16.670098 containerd[1868]: time="2026-01-17T00:02:16.669684763Z" level=info msg="StopPodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" returns successfully" Jan 17 00:02:16.671630 containerd[1868]: time="2026-01-17T00:02:16.671606683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854949db7b-nkqdw,Uid:d6662ed8-4409-4f39-bb3b-ba711a87545b,Namespace:calico-system,Attempt:1,}" Jan 17 00:02:16.674913 systemd[1]: run-netns-cni\x2d6cd71592\x2d8e75\x2d8d67\x2df34b\x2d380d6e47ac85.mount: Deactivated successfully. Jan 17 00:02:16.730508 kubelet[3363]: E0117 00:02:16.730452 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:16.861799 systemd-networkd[1412]: cali48f66b60171: Link UP Jan 17 00:02:16.866390 systemd-networkd[1412]: cali48f66b60171: Gained carrier Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.729 [INFO][5101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0 csi-node-driver- calico-system e5fabfab-a45e-49bd-b3b5-28097628ac44 920 0 2026-01-17 00:01:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 csi-node-driver-6zk6p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali48f66b60171 [] [] }} ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.729 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.799 [INFO][5114] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" HandleID="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.799 [INFO][5114] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" HandleID="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa640), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"csi-node-driver-6zk6p", "timestamp":"2026-01-17 00:02:16.799548247 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.799 [INFO][5114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.800 [INFO][5114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.800 [INFO][5114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.816 [INFO][5114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.821 [INFO][5114] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.825 [INFO][5114] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.828 [INFO][5114] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.831 [INFO][5114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.831 [INFO][5114] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.832 [INFO][5114] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1 Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.840 [INFO][5114] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.848 [INFO][5114] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.195/26] block=192.168.47.192/26 handle="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.848 [INFO][5114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.195/26] handle="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.848 [INFO][5114] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:16.885328 containerd[1868]: 2026-01-17 00:02:16.848 [INFO][5114] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.195/26] IPv6=[] ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" HandleID="k8s-pod-network.5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.886473 containerd[1868]: 2026-01-17 00:02:16.852 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5fabfab-a45e-49bd-b3b5-28097628ac44", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"csi-node-driver-6zk6p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48f66b60171", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:16.886473 containerd[1868]: 2026-01-17 00:02:16.852 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.195/32] ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.886473 containerd[1868]: 2026-01-17 00:02:16.852 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48f66b60171 ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.886473 containerd[1868]: 2026-01-17 00:02:16.868 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.886473 containerd[1868]: 2026-01-17 00:02:16.868 [INFO][5101] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5fabfab-a45e-49bd-b3b5-28097628ac44", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1", Pod:"csi-node-driver-6zk6p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48f66b60171", MAC:"da:65:6f:45:66:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:16.886473 containerd[1868]: 2026-01-17 00:02:16.882 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1" Namespace="calico-system" Pod="csi-node-driver-6zk6p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:16.909325 containerd[1868]: time="2026-01-17T00:02:16.909252096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:16.909325 containerd[1868]: time="2026-01-17T00:02:16.909291976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:16.909325 containerd[1868]: time="2026-01-17T00:02:16.909302056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:16.909667 containerd[1868]: time="2026-01-17T00:02:16.909365416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:16.948101 containerd[1868]: time="2026-01-17T00:02:16.948068765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zk6p,Uid:e5fabfab-a45e-49bd-b3b5-28097628ac44,Namespace:calico-system,Attempt:1,} returns sandbox id \"5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1\"" Jan 17 00:02:16.951297 containerd[1868]: time="2026-01-17T00:02:16.950507605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:02:16.965232 systemd-networkd[1412]: cali0f7d545ae88: Link UP Jan 17 00:02:16.966484 systemd-networkd[1412]: cali0f7d545ae88: Gained carrier Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.805 [INFO][5115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0 calico-apiserver-7cb7f6dddc- calico-apiserver 6139bf28-5324-4c65-a1a9-809ea0e0b5cf 922 0 2026-01-17 00:01:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cb7f6dddc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 calico-apiserver-7cb7f6dddc-pk2n6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f7d545ae88 [] [] }} ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.805 [INFO][5115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.858 [INFO][5148] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" HandleID="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.858 [INFO][5148] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" HandleID="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"calico-apiserver-7cb7f6dddc-pk2n6", "timestamp":"2026-01-17 00:02:16.858136791 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.858 [INFO][5148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
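The systemd-networkd records above ("cali0f7d545ae88: Link UP", "Gained carrier") can be cross-checked from the node itself, since Linux exposes per-interface link state under /sys/class/net. A minimal Go sketch, assuming it runs on this node while the interface still exists; the interface name is taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Host-side veth name from the log above; it exists only on that node.
	const iface = "cali0f7d545ae88"
	for _, attr := range []string{"operstate", "carrier"} {
		b, err := os.ReadFile("/sys/class/net/" + iface + "/" + attr)
		if err != nil {
			fmt.Printf("%s: %v\n", attr, err) // reading carrier fails while the link is down
			continue
		}
		fmt.Printf("%s: %s\n", attr, strings.TrimSpace(string(b)))
	}
}
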
Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.858 [INFO][5148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.858 [INFO][5148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.917 [INFO][5148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.921 [INFO][5148] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.927 [INFO][5148] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.928 [INFO][5148] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.932 [INFO][5148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.932 [INFO][5148] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.933 [INFO][5148] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731 Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.941 [INFO][5148] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.953 [INFO][5148] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.196/26] block=192.168.47.192/26 handle="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.953 [INFO][5148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.196/26] handle="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.953 [INFO][5148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
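The [5148] sequence above is the block-based assignment in miniature: confirm the host's affinity for 192.168.47.192/26, load the block, claim the next free /32, and write the block back. A simplified, self-contained sketch of that claim step, not Calico's actual code; the already-used addresses below are illustrative:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affinity block from the log; a /26 holds 64 addresses.
	block := netip.MustParsePrefix("192.168.47.192/26")
	inUse := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.47.192"): true, // assumed already reserved
		netip.MustParseAddr("192.168.47.193"): true, // earlier workloads (illustrative)
		netip.MustParseAddr("192.168.47.194"): true,
	}
	// Claim the lowest free address in the block.
	claim := func() netip.Addr {
		for a := block.Addr(); block.Contains(a); a = a.Next() {
			if !inUse[a] {
				inUse[a] = true
				return a
			}
		}
		return netip.Addr{} // block exhausted; a real IPAM would claim another block
	}
	// Prints .195 then .196, the order the log shows for csi-node-driver-6zk6p
	// and calico-apiserver-7cb7f6dddc-pk2n6.
	fmt.Println(claim(), claim())
}
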
Jan 17 00:02:16.989864 containerd[1868]: 2026-01-17 00:02:16.953 [INFO][5148] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.196/26] IPv6=[] ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" HandleID="k8s-pod-network.d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.990558 containerd[1868]: 2026-01-17 00:02:16.957 [INFO][5115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6139bf28-5324-4c65-a1a9-809ea0e0b5cf", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"calico-apiserver-7cb7f6dddc-pk2n6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f7d545ae88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:16.990558 containerd[1868]: 2026-01-17 00:02:16.959 [INFO][5115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.196/32] ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.990558 containerd[1868]: 2026-01-17 00:02:16.959 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f7d545ae88 ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.990558 containerd[1868]: 2026-01-17 00:02:16.968 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:16.990558 containerd[1868]: 2026-01-17 00:02:16.969
[INFO][5115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6139bf28-5324-4c65-a1a9-809ea0e0b5cf", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731", Pod:"calico-apiserver-7cb7f6dddc-pk2n6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f7d545ae88", MAC:"e6:fb:e1:a4:42:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:16.990558 containerd[1868]: 2026-01-17 00:02:16.987 [INFO][5115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-pk2n6" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:17.014728 containerd[1868]: time="2026-01-17T00:02:17.014648347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:17.015422 containerd[1868]: time="2026-01-17T00:02:17.015221666Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:17.015422 containerd[1868]: time="2026-01-17T00:02:17.015240866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:17.015422 containerd[1868]: time="2026-01-17T00:02:17.015322426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:17.062136 systemd-networkd[1412]: cali2ae9019d675: Link UP Jan 17 00:02:17.062544 systemd-networkd[1412]: cali2ae9019d675: Gained carrier Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.817 [INFO][5122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0 calico-kube-controllers-854949db7b- calico-system d6662ed8-4409-4f39-bb3b-ba711a87545b 923 0 2026-01-17 00:01:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:854949db7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 calico-kube-controllers-854949db7b-nkqdw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2ae9019d675 [] [] }} ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.817 [INFO][5122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.864 [INFO][5154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" HandleID="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.865 [INFO][5154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" HandleID="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"calico-kube-controllers-854949db7b-nkqdw", "timestamp":"2026-01-17 00:02:16.864877509 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.865 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.953 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:16.953 [INFO][5154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.017 [INFO][5154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.022 [INFO][5154] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.027 [INFO][5154] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.030 [INFO][5154] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.032 [INFO][5154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.032 [INFO][5154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.033 [INFO][5154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8 Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.041 [INFO][5154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.051 [INFO][5154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.197/26] block=192.168.47.192/26 handle="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.051 [INFO][5154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.197/26] handle="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.051 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
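Worth noting in the timestamps: request [5154] logged "About to acquire" at 00:02:16.865 but only acquired the lock at 00:02:16.953, the instant [5148] released it, so per-host assignments strictly serialize. A toy reproduction of that contention with a plain mutex, illustrative only (which goroutine wins first depends on scheduling):

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var hostWideIPAMLock sync.Mutex
	var wg sync.WaitGroup
	for _, id := range []string{"5148", "5154"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", id)
			hostWideIPAMLock.Lock()
			fmt.Printf("[%s] acquired host-wide IPAM lock\n", id)
			time.Sleep(90 * time.Millisecond) // load block, claim an IP, write block back
			hostWideIPAMLock.Unlock()
			fmt.Printf("[%s] released host-wide IPAM lock\n", id)
		}(id)
	}
	wg.Wait()
}
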
Jan 17 00:02:17.091843 containerd[1868]: 2026-01-17 00:02:17.051 [INFO][5154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.197/26] IPv6=[] ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" HandleID="k8s-pod-network.bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.092404 containerd[1868]: 2026-01-17 00:02:17.057 [INFO][5122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0", GenerateName:"calico-kube-controllers-854949db7b-", Namespace:"calico-system", SelfLink:"", UID:"d6662ed8-4409-4f39-bb3b-ba711a87545b", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854949db7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"calico-kube-controllers-854949db7b-nkqdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ae9019d675", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:17.092404 containerd[1868]: 2026-01-17 00:02:17.058 [INFO][5122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.197/32] ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.092404 containerd[1868]: 2026-01-17 00:02:17.058 [INFO][5122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ae9019d675 ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.092404 containerd[1868]: 2026-01-17 00:02:17.063 [INFO][5122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw"
WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.092404 containerd[1868]: 2026-01-17 00:02:17.065 [INFO][5122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0", GenerateName:"calico-kube-controllers-854949db7b-", Namespace:"calico-system", SelfLink:"", UID:"d6662ed8-4409-4f39-bb3b-ba711a87545b", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854949db7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8", Pod:"calico-kube-controllers-854949db7b-nkqdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ae9019d675", MAC:"12:6c:ed:d1:8f:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:17.092404 containerd[1868]: 2026-01-17 00:02:17.079 [INFO][5122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8" Namespace="calico-system" Pod="calico-kube-controllers-854949db7b-nkqdw" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:17.095954 containerd[1868]: time="2026-01-17T00:02:17.095155244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-pk2n6,Uid:6139bf28-5324-4c65-a1a9-809ea0e0b5cf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731\"" Jan 17 00:02:17.116317 containerd[1868]: time="2026-01-17T00:02:17.115567998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:17.116317 containerd[1868]: time="2026-01-17T00:02:17.115616478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:17.116317 containerd[1868]: time="2026-01-17T00:02:17.115638318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:17.116317 containerd[1868]: time="2026-01-17T00:02:17.115711798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:17.155220 containerd[1868]: time="2026-01-17T00:02:17.155154507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-854949db7b-nkqdw,Uid:d6662ed8-4409-4f39-bb3b-ba711a87545b,Namespace:calico-system,Attempt:1,} returns sandbox id \"bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8\"" Jan 17 00:02:17.201732 containerd[1868]: time="2026-01-17T00:02:17.201690294Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:17.206578 containerd[1868]: time="2026-01-17T00:02:17.206543733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:02:17.206657 containerd[1868]: time="2026-01-17T00:02:17.206640053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:02:17.206829 kubelet[3363]: E0117 00:02:17.206794 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:17.207156 kubelet[3363]: E0117 00:02:17.206839 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:17.208819 containerd[1868]: time="2026-01-17T00:02:17.208686932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:02:17.210513 kubelet[3363]: E0117 00:02:17.210466 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:17.252322 systemd-networkd[1412]: cali72035ebe7de: Gained IPv6LL Jan 17 00:02:17.459663 containerd[1868]: time="2026-01-17T00:02:17.459463942Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:17.464824 containerd[1868]: time="2026-01-17T00:02:17.464786700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:02:17.464824 containerd[1868]: time="2026-01-17T00:02:17.464858940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:17.465062 kubelet[3363]: E0117 00:02:17.465007 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:17.465109 kubelet[3363]: E0117 00:02:17.465074 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:17.465537 kubelet[3363]: E0117 00:02:17.465324 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mg8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-pk2n6_calico-apiserver(6139bf28-5324-4c65-a1a9-809ea0e0b5cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:17.465666 containerd[1868]: time="2026-01-17T00:02:17.465365900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:02:17.466949 kubelet[3363]: E0117 00:02:17.466916 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:17.504447 containerd[1868]: time="2026-01-17T00:02:17.504165089Z" level=info msg="StopPodSandbox for 
\"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\"" Jan 17 00:02:17.508414 containerd[1868]: time="2026-01-17T00:02:17.507221208Z" level=info msg="StopPodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\"" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.564 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.565 [INFO][5331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" iface="eth0" netns="/var/run/netns/cni-fc26344f-0754-0890-8dab-964a6e6bd23e" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.565 [INFO][5331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" iface="eth0" netns="/var/run/netns/cni-fc26344f-0754-0890-8dab-964a6e6bd23e" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.566 [INFO][5331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" iface="eth0" netns="/var/run/netns/cni-fc26344f-0754-0890-8dab-964a6e6bd23e" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.566 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.566 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.589 [INFO][5345] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.589 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.589 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.599 [WARNING][5345] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.599 [INFO][5345] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.600 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:17.606521 containerd[1868]: 2026-01-17 00:02:17.603 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:17.607616 containerd[1868]: time="2026-01-17T00:02:17.606687740Z" level=info msg="TearDown network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" successfully" Jan 17 00:02:17.607616 containerd[1868]: time="2026-01-17T00:02:17.606714060Z" level=info msg="StopPodSandbox for \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" returns successfully" Jan 17 00:02:17.607616 containerd[1868]: time="2026-01-17T00:02:17.607468300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvlhk,Uid:fcd9ca75-edcd-4265-90e0-090e79b4eb07,Namespace:kube-system,Attempt:1,}" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.570 [INFO][5332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.570 [INFO][5332] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" iface="eth0" netns="/var/run/netns/cni-5404071e-95f0-128e-28f7-ac9c5977c6eb" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.570 [INFO][5332] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" iface="eth0" netns="/var/run/netns/cni-5404071e-95f0-128e-28f7-ac9c5977c6eb" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.571 [INFO][5332] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" iface="eth0" netns="/var/run/netns/cni-5404071e-95f0-128e-28f7-ac9c5977c6eb" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.571 [INFO][5332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.571 [INFO][5332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.610 [INFO][5350] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.610 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.611 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.619 [WARNING][5350] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.619 [INFO][5350] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.620 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:17.623823 containerd[1868]: 2026-01-17 00:02:17.622 [INFO][5332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:17.624375 containerd[1868]: time="2026-01-17T00:02:17.623938455Z" level=info msg="TearDown network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" successfully" Jan 17 00:02:17.624375 containerd[1868]: time="2026-01-17T00:02:17.623959015Z" level=info msg="StopPodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" returns successfully" Jan 17 00:02:17.624870 containerd[1868]: time="2026-01-17T00:02:17.624846375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpdqp,Uid:26750179-e403-4a5c-a534-2a8e795f3838,Namespace:kube-system,Attempt:1,}" Jan 17 00:02:17.640565 systemd[1]: run-netns-cni\x2dfc26344f\x2d0754\x2d0890\x2d8dab\x2d964a6e6bd23e.mount: Deactivated successfully. Jan 17 00:02:17.640700 systemd[1]: run-netns-cni\x2d5404071e\x2d95f0\x2d128e\x2d28f7\x2dac9c5977c6eb.mount: Deactivated successfully. 
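The WARNING in both teardowns above ("Asked to release address but it doesn't exist. Ignoring") reflects that a CNI DEL can run for a sandbox whose allocation is already gone, so release is idempotent: try the handle ID, fall back to the workload ID, and treat a missing allocation as success. A minimal sketch of that tolerant release, with an abbreviated handle ID; not the plugin's actual code:

package main

import "fmt"

// releaseByHandle drops the allocation for one handle ID. A missing
// allocation is logged and ignored, mirroring the WARNING above, so a
// repeated CNI DEL stays harmless.
func releaseByHandle(allocs map[string]string, handleID string) {
	addr, ok := allocs[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(allocs, handleID)
	fmt.Printf("released %s (handle %q)\n", addr, handleID)
}

func main() {
	allocs := map[string]string{}                            // nothing recorded for this sandbox anymore
	releaseByHandle(allocs, "k8s-pod-network.ccb63068e4de")  // abbreviated handle: warns, continues
	releaseByHandle(allocs, "k8s-pod-network.ccb63068e4de")  // a retry is equally a no-op
}
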
Jan 17 00:02:17.742756 containerd[1868]: time="2026-01-17T00:02:17.741551862Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:17.747269 containerd[1868]: time="2026-01-17T00:02:17.747227741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:02:17.749074 containerd[1868]: time="2026-01-17T00:02:17.747456741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:17.750054 kubelet[3363]: E0117 00:02:17.749826 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:17.750269 kubelet[3363]: E0117 00:02:17.750242 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:17.750677 containerd[1868]: time="2026-01-17T00:02:17.750654580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:02:17.750895 kubelet[3363]: E0117 00:02:17.750845 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2h2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854949db7b-nkqdw_calico-system(d6662ed8-4409-4f39-bb3b-ba711a87545b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:17.752947 kubelet[3363]: E0117 00:02:17.752825 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:17.757080 kubelet[3363]: E0117 00:02:17.756836 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:17.761617 kubelet[3363]: E0117 00:02:17.761576 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:17.859633 systemd-networkd[1412]: caliccdea631770: Link UP Jan 17 00:02:17.864331 systemd-networkd[1412]: caliccdea631770: Gained carrier Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.718 
[INFO][5358] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0 coredns-668d6bf9bc- kube-system fcd9ca75-edcd-4265-90e0-090e79b4eb07 950 0 2026-01-17 00:01:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 coredns-668d6bf9bc-pvlhk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliccdea631770 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.718 [INFO][5358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.769 [INFO][5381] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" HandleID="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.769 [INFO][5381] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" HandleID="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"coredns-668d6bf9bc-pvlhk", "timestamp":"2026-01-17 00:02:17.769065015 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.770 [INFO][5381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.770 [INFO][5381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.770 [INFO][5381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.790 [INFO][5381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.801 [INFO][5381] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.819 [INFO][5381] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.822 [INFO][5381] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.825 [INFO][5381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.825 [INFO][5381] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.826 [INFO][5381] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86 Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.831 [INFO][5381] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.839 [INFO][5381] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.198/26] block=192.168.47.192/26 handle="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.839 [INFO][5381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.198/26] handle="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.839 [INFO][5381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
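Every pull above fails the same way: the ghcr.io/flatcar/calico images for csi, apiserver, kube-controllers, and goldmane at v3.30.4 all resolve to NotFound, so kubelet moves each container from ErrImagePull into ImagePullBackOff. A sketch of the doubling retry delay that status implies; the 10s start and 5m cap match kubelet's documented image-pull defaults, but the loop itself is illustrative:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s initial delay, doubling per failure, capped at 5m.
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d: NotFound; next retry in %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
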
Jan 17 00:02:17.890808 containerd[1868]: 2026-01-17 00:02:17.839 [INFO][5381] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.198/26] IPv6=[] ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" HandleID="k8s-pod-network.caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.892674 containerd[1868]: 2026-01-17 00:02:17.843 [INFO][5358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fcd9ca75-edcd-4265-90e0-090e79b4eb07", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"coredns-668d6bf9bc-pvlhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccdea631770", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:17.892674 containerd[1868]: 2026-01-17 00:02:17.843 [INFO][5358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.198/32] ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.892674 containerd[1868]: 2026-01-17 00:02:17.843 [INFO][5358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccdea631770 ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.892674 containerd[1868]: 2026-01-17 00:02:17.866 [INFO][5358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.892674 containerd[1868]: 2026-01-17 00:02:17.867 [INFO][5358] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fcd9ca75-edcd-4265-90e0-090e79b4eb07", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86", Pod:"coredns-668d6bf9bc-pvlhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccdea631770", MAC:"2e:fa:ab:fd:d4:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:17.893161 containerd[1868]: 2026-01-17 00:02:17.884 [INFO][5358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86" Namespace="kube-system" Pod="coredns-668d6bf9bc-pvlhk" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:17.919701 containerd[1868]: time="2026-01-17T00:02:17.919610652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:17.919981 containerd[1868]: time="2026-01-17T00:02:17.919876932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:17.919981 containerd[1868]: time="2026-01-17T00:02:17.919898292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:17.920150 containerd[1868]: time="2026-01-17T00:02:17.920113132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:17.957331 systemd-networkd[1412]: cali48f66b60171: Gained IPv6LL Jan 17 00:02:17.978738 systemd-networkd[1412]: calif2f0703ebe1: Link UP Jan 17 00:02:17.979580 systemd-networkd[1412]: calif2f0703ebe1: Gained carrier Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.746 [INFO][5369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0 coredns-668d6bf9bc- kube-system 26750179-e403-4a5c-a534-2a8e795f3838 951 0 2026-01-17 00:01:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 coredns-668d6bf9bc-lpdqp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif2f0703ebe1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.746 [INFO][5369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.802 [INFO][5388] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" HandleID="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.803 [INFO][5388] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" HandleID="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab210), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"coredns-668d6bf9bc-lpdqp", "timestamp":"2026-01-17 00:02:17.802989445 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.803 [INFO][5388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.840 [INFO][5388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.840 [INFO][5388] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.886 [INFO][5388] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.902 [INFO][5388] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.926 [INFO][5388] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.928 [INFO][5388] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.930 [INFO][5388] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.930 [INFO][5388] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.933 [INFO][5388] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907 Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.945 [INFO][5388] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.961 [INFO][5388] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.199/26] block=192.168.47.192/26 handle="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.962 [INFO][5388] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.199/26] handle="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.962 [INFO][5388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:02:18.010925 containerd[1868]: 2026-01-17 00:02:17.962 [INFO][5388] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.199/26] IPv6=[] ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" HandleID="k8s-pod-network.8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.011486 containerd[1868]: 2026-01-17 00:02:17.970 [INFO][5369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26750179-e403-4a5c-a534-2a8e795f3838", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"coredns-668d6bf9bc-lpdqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f0703ebe1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:18.011486 containerd[1868]: 2026-01-17 00:02:17.972 [INFO][5369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.199/32] ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.011486 containerd[1868]: 2026-01-17 00:02:17.972 [INFO][5369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2f0703ebe1 ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.011486 containerd[1868]: 2026-01-17 00:02:17.980 [INFO][5369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.011486 containerd[1868]: 2026-01-17 00:02:17.984 [INFO][5369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26750179-e403-4a5c-a534-2a8e795f3838", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907", Pod:"coredns-668d6bf9bc-lpdqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f0703ebe1", MAC:"4a:3c:c9:f2:51:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:18.015277 containerd[1868]: 2026-01-17 00:02:17.997 [INFO][5369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907" Namespace="kube-system" Pod="coredns-668d6bf9bc-lpdqp" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:18.020393 containerd[1868]: time="2026-01-17T00:02:18.020340584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:18.021303 systemd-networkd[1412]: cali0f7d545ae88: Gained IPv6LL Jan 17 00:02:18.023898 containerd[1868]: time="2026-01-17T00:02:18.022957743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pvlhk,Uid:fcd9ca75-edcd-4265-90e0-090e79b4eb07,Namespace:kube-system,Attempt:1,} returns sandbox id \"caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86\"" Jan 17 00:02:18.035856 containerd[1868]: time="2026-01-17T00:02:18.035801180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:02:18.035974 containerd[1868]: time="2026-01-17T00:02:18.035894420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:02:18.036037 kubelet[3363]: E0117 00:02:18.036004 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:18.036100 kubelet[3363]: E0117 00:02:18.036046 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:18.037543 containerd[1868]: time="2026-01-17T00:02:18.037513179Z" level=info msg="CreateContainer within sandbox \"caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:02:18.037631 kubelet[3363]: E0117 00:02:18.036166 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:18.040009 kubelet[3363]: E0117 00:02:18.039944 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:02:18.058298 containerd[1868]: time="2026-01-17T00:02:18.058128614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:18.058298 containerd[1868]: time="2026-01-17T00:02:18.058239134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:18.058298 containerd[1868]: time="2026-01-17T00:02:18.058263734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:18.058899 containerd[1868]: time="2026-01-17T00:02:18.058490413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:18.082778 containerd[1868]: time="2026-01-17T00:02:18.082727807Z" level=info msg="CreateContainer within sandbox \"caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1f71690a3ae355aa81a0aba5f6de7c32e7b7094ee4731f87c933a0eabb690810\"" Jan 17 00:02:18.084809 containerd[1868]: time="2026-01-17T00:02:18.083945926Z" level=info msg="StartContainer for \"1f71690a3ae355aa81a0aba5f6de7c32e7b7094ee4731f87c933a0eabb690810\"" Jan 17 00:02:18.110744 containerd[1868]: time="2026-01-17T00:02:18.110709239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lpdqp,Uid:26750179-e403-4a5c-a534-2a8e795f3838,Namespace:kube-system,Attempt:1,} returns sandbox id \"8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907\"" Jan 17 00:02:18.116301 containerd[1868]: time="2026-01-17T00:02:18.116088917Z" level=info msg="CreateContainer within sandbox \"8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:02:18.143596 containerd[1868]: time="2026-01-17T00:02:18.143530950Z" level=info msg="StartContainer for \"1f71690a3ae355aa81a0aba5f6de7c32e7b7094ee4731f87c933a0eabb690810\" returns successfully" Jan 17 00:02:18.161692 containerd[1868]: time="2026-01-17T00:02:18.161466145Z" level=info msg="CreateContainer within sandbox \"8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2679837ca34575927b4ccf48ab6b072f7358713a11a754d74dc677e81d1b46dc\"" Jan 17 00:02:18.162181 containerd[1868]: time="2026-01-17T00:02:18.162128224Z" level=info msg="StartContainer for \"2679837ca34575927b4ccf48ab6b072f7358713a11a754d74dc677e81d1b46dc\"" Jan 17 00:02:18.233998 containerd[1868]: time="2026-01-17T00:02:18.233928964Z" level=info msg="StartContainer for \"2679837ca34575927b4ccf48ab6b072f7358713a11a754d74dc677e81d1b46dc\" returns successfully" Jan 17 00:02:18.468418 systemd-networkd[1412]: cali2ae9019d675: Gained IPv6LL Jan 17 00:02:18.503587 containerd[1868]: time="2026-01-17T00:02:18.503504969Z" level=info msg="StopPodSandbox for \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\"" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.555 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.555 [INFO][5584] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" iface="eth0" netns="/var/run/netns/cni-a3bb47d2-8968-7855-25eb-90a5dda8c6d5" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.555 [INFO][5584] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" iface="eth0" netns="/var/run/netns/cni-a3bb47d2-8968-7855-25eb-90a5dda8c6d5" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.556 [INFO][5584] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" iface="eth0" netns="/var/run/netns/cni-a3bb47d2-8968-7855-25eb-90a5dda8c6d5" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.556 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.556 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.573 [INFO][5591] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.573 [INFO][5591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.573 [INFO][5591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.583 [WARNING][5591] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.584 [INFO][5591] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.585 [INFO][5591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:18.588419 containerd[1868]: 2026-01-17 00:02:18.586 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:18.589330 containerd[1868]: time="2026-01-17T00:02:18.589050304Z" level=info msg="TearDown network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" successfully" Jan 17 00:02:18.589330 containerd[1868]: time="2026-01-17T00:02:18.589081864Z" level=info msg="StopPodSandbox for \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" returns successfully" Jan 17 00:02:18.589853 containerd[1868]: time="2026-01-17T00:02:18.589827984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-5gx8p,Uid:7e6f1f60-5b4d-4f6a-92be-ef48b02574bd,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:02:18.639368 systemd[1]: run-netns-cni\x2da3bb47d2\x2d8968\x2d7855\x2d25eb\x2d90a5dda8c6d5.mount: Deactivated successfully. 
Jan 17 00:02:18.729791 systemd-networkd[1412]: cali489b60b1142: Link UP Jan 17 00:02:18.730901 systemd-networkd[1412]: cali489b60b1142: Gained carrier Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.667 [INFO][5597] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0 calico-apiserver-7cb7f6dddc- calico-apiserver 7e6f1f60-5b4d-4f6a-92be-ef48b02574bd 981 0 2026-01-17 00:01:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cb7f6dddc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e1db9b2d97 calico-apiserver-7cb7f6dddc-5gx8p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali489b60b1142 [] [] }} ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.667 [INFO][5597] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.687 [INFO][5610] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" HandleID="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.687 [INFO][5610] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" HandleID="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa4c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e1db9b2d97", "pod":"calico-apiserver-7cb7f6dddc-5gx8p", "timestamp":"2026-01-17 00:02:18.687163517 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e1db9b2d97", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.687 [INFO][5610] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.687 [INFO][5610] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.687 [INFO][5610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e1db9b2d97' Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.695 [INFO][5610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.699 [INFO][5610] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.702 [INFO][5610] ipam/ipam.go 511: Trying affinity for 192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.703 [INFO][5610] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.705 [INFO][5610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.192/26 host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.705 [INFO][5610] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.192/26 handle="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.706 [INFO][5610] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.714 [INFO][5610] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.192/26 handle="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.723 [INFO][5610] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.200/26] block=192.168.47.192/26 handle="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.723 [INFO][5610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.200/26] handle="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" host="ci-4081.3.6-n-e1db9b2d97" Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.723 [INFO][5610] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:02:18.747463 containerd[1868]: 2026-01-17 00:02:18.723 [INFO][5610] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.200/26] IPv6=[] ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" HandleID="k8s-pod-network.d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.748037 containerd[1868]: 2026-01-17 00:02:18.726 [INFO][5597] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"", Pod:"calico-apiserver-7cb7f6dddc-5gx8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali489b60b1142", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:18.748037 containerd[1868]: 2026-01-17 00:02:18.726 [INFO][5597] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.200/32] ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.748037 containerd[1868]: 2026-01-17 00:02:18.726 [INFO][5597] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali489b60b1142 ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.748037 containerd[1868]: 2026-01-17 00:02:18.730 [INFO][5597] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.748037 containerd[1868]: 2026-01-17 00:02:18.731 
[INFO][5597] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e", Pod:"calico-apiserver-7cb7f6dddc-5gx8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali489b60b1142", MAC:"ca:44:a9:48:a1:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:18.748037 containerd[1868]: 2026-01-17 00:02:18.742 [INFO][5597] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e" Namespace="calico-apiserver" Pod="calico-apiserver-7cb7f6dddc-5gx8p" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:18.770157 kubelet[3363]: E0117 00:02:18.769499 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:18.770157 kubelet[3363]: E0117 00:02:18.769634 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:18.771749 kubelet[3363]: E0117 00:02:18.771596 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:02:18.782063 kubelet[3363]: I0117 00:02:18.781805 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pvlhk" podStartSLOduration=43.78179169 podStartE2EDuration="43.78179169s" podCreationTimestamp="2026-01-17 00:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:18.779070531 +0000 UTC m=+49.384375805" watchObservedRunningTime="2026-01-17 00:02:18.78179169 +0000 UTC m=+49.387096964" Jan 17 00:02:18.786473 containerd[1868]: time="2026-01-17T00:02:18.785288929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:18.786473 containerd[1868]: time="2026-01-17T00:02:18.785341969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:18.786473 containerd[1868]: time="2026-01-17T00:02:18.785357849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:18.786473 containerd[1868]: time="2026-01-17T00:02:18.785458409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:18.874004 containerd[1868]: time="2026-01-17T00:02:18.873968304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cb7f6dddc-5gx8p,Uid:7e6f1f60-5b4d-4f6a-92be-ef48b02574bd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e\"" Jan 17 00:02:18.879046 containerd[1868]: time="2026-01-17T00:02:18.878165343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:02:18.894285 kubelet[3363]: I0117 00:02:18.894231 3363 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lpdqp" podStartSLOduration=43.893337379 podStartE2EDuration="43.893337379s" podCreationTimestamp="2026-01-17 00:01:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:18.89123162 +0000 UTC m=+49.496536894" watchObservedRunningTime="2026-01-17 00:02:18.893337379 +0000 UTC m=+49.498642653" Jan 17 00:02:19.152779 containerd[1868]: time="2026-01-17T00:02:19.152684386Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:19.159637 containerd[1868]: time="2026-01-17T00:02:19.159600184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:02:19.159716 containerd[1868]: time="2026-01-17T00:02:19.159688264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:19.159850 kubelet[3363]: E0117 00:02:19.159816 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:19.159899 kubelet[3363]: E0117 00:02:19.159862 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:19.160268 kubelet[3363]: E0117 00:02:19.160223 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-5gx8p_calico-apiserver(7e6f1f60-5b4d-4f6a-92be-ef48b02574bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:19.161592 kubelet[3363]: E0117 00:02:19.161376 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:19.492327 systemd-networkd[1412]: caliccdea631770: Gained IPv6LL Jan 17 00:02:19.769264 kubelet[3363]: E0117 00:02:19.768496 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:19.771680 kubelet[3363]: E0117 00:02:19.771622 
3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:19.876440 systemd-networkd[1412]: calif2f0703ebe1: Gained IPv6LL Jan 17 00:02:20.068343 systemd-networkd[1412]: cali489b60b1142: Gained IPv6LL Jan 17 00:02:20.773353 kubelet[3363]: E0117 00:02:20.773276 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:23.506594 containerd[1868]: time="2026-01-17T00:02:23.506461262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:02:23.778704 containerd[1868]: time="2026-01-17T00:02:23.778630994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:23.781939 containerd[1868]: time="2026-01-17T00:02:23.781886233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:02:23.782111 containerd[1868]: time="2026-01-17T00:02:23.781968153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:02:23.782144 kubelet[3363]: E0117 00:02:23.782095 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:02:23.782144 kubelet[3363]: E0117 00:02:23.782139 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:02:23.782646 kubelet[3363]: E0117 00:02:23.782246 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2f3b6028e4fe4b7a9477513ac0bfee1b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:23.785352 containerd[1868]: time="2026-01-17T00:02:23.785308952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:02:24.028090 containerd[1868]: time="2026-01-17T00:02:24.028032772Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:24.031462 containerd[1868]: time="2026-01-17T00:02:24.031346291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:02:24.031462 containerd[1868]: time="2026-01-17T00:02:24.031436811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:24.032486 kubelet[3363]: E0117 00:02:24.031996 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:02:24.032486 kubelet[3363]: E0117 00:02:24.032042 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:02:24.032486 kubelet[3363]: E0117 00:02:24.032137 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:24.033742 kubelet[3363]: E0117 00:02:24.033635 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:02:29.518911 containerd[1868]: time="2026-01-17T00:02:29.518660099Z" level=info msg="StopPodSandbox for \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\"" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.549 
[WARNING][5694] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fcd9ca75-edcd-4265-90e0-090e79b4eb07", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86", Pod:"coredns-668d6bf9bc-pvlhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccdea631770", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.551 [INFO][5694] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.551 [INFO][5694] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" iface="eth0" netns="" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.551 [INFO][5694] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.551 [INFO][5694] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.573 [INFO][5701] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.573 [INFO][5701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
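The ErrImagePull and ImagePullBackOff records earlier in this log all share one root cause: ghcr.io answers 404 for the flatcar/calico tags at v3.30.4 ("trying next host - response was http.StatusNotFound"), so containerd reports NotFound when resolving the reference and the kubelet backs off and retries. A minimal Go sketch that reproduces the failing resolve/pull step against the same containerd instance; the socket path and the k8s.io namespace are containerd/CRI defaults assumed here, not values read from this host:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumption: default containerd socket; adjust if this host overrides it.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The reference from the log; resolving it is the step that fails.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	if _, err := client.Pull(ctx, ref); err != nil {
		// Expected here: failed to resolve reference "...": ... not found
		fmt.Printf("pull %s failed: %v\n", ref, err)
	}
}
```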
Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.573 [INFO][5701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.583 [WARNING][5701] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.583 [INFO][5701] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.584 [INFO][5701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:29.587334 containerd[1868]: 2026-01-17 00:02:29.585 [INFO][5694] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.587750 containerd[1868]: time="2026-01-17T00:02:29.587370954Z" level=info msg="TearDown network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" successfully" Jan 17 00:02:29.587750 containerd[1868]: time="2026-01-17T00:02:29.587396154Z" level=info msg="StopPodSandbox for \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" returns successfully" Jan 17 00:02:29.587892 containerd[1868]: time="2026-01-17T00:02:29.587854034Z" level=info msg="RemovePodSandbox for \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\"" Jan 17 00:02:29.587892 containerd[1868]: time="2026-01-17T00:02:29.587886554Z" level=info msg="Forcibly stopping sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\"" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.618 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"fcd9ca75-edcd-4265-90e0-090e79b4eb07", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"caea7a85ecc0de03c12e1652bb4ea9f819a3aa0397d96d4969624bbf91debc86", Pod:"coredns-668d6bf9bc-pvlhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliccdea631770", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.618 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.618 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" iface="eth0" netns="" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.618 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.618 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.636 [INFO][5722] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.636 [INFO][5722] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.636 [INFO][5722] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.644 [WARNING][5722] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.645 [INFO][5722] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" HandleID="k8s-pod-network.ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--pvlhk-eth0" Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.646 [INFO][5722] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:29.649066 containerd[1868]: 2026-01-17 00:02:29.647 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3" Jan 17 00:02:29.649491 containerd[1868]: time="2026-01-17T00:02:29.649110493Z" level=info msg="TearDown network for sandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" successfully" Jan 17 00:02:29.663441 containerd[1868]: time="2026-01-17T00:02:29.663368447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:29.663585 containerd[1868]: time="2026-01-17T00:02:29.663453487Z" level=info msg="RemovePodSandbox \"ccb63068e4ded495def69e45364ea2420783fa48193517245b659bbcc64f79d3\" returns successfully" Jan 17 00:02:29.664044 containerd[1868]: time="2026-01-17T00:02:29.664023207Z" level=info msg="StopPodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\"" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.696 [WARNING][5736] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0", GenerateName:"calico-kube-controllers-854949db7b-", Namespace:"calico-system", SelfLink:"", UID:"d6662ed8-4409-4f39-bb3b-ba711a87545b", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854949db7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8", Pod:"calico-kube-controllers-854949db7b-nkqdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ae9019d675", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.698 [INFO][5736] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.698 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" iface="eth0" netns="" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.698 [INFO][5736] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.698 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.728 [INFO][5744] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.728 [INFO][5744] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.728 [INFO][5744] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.736 [WARNING][5744] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.736 [INFO][5744] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.738 [INFO][5744] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:29.741488 containerd[1868]: 2026-01-17 00:02:29.739 [INFO][5736] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.741488 containerd[1868]: time="2026-01-17T00:02:29.741365740Z" level=info msg="TearDown network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" successfully" Jan 17 00:02:29.741488 containerd[1868]: time="2026-01-17T00:02:29.741394020Z" level=info msg="StopPodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" returns successfully" Jan 17 00:02:29.741983 containerd[1868]: time="2026-01-17T00:02:29.741803580Z" level=info msg="RemovePodSandbox for \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\"" Jan 17 00:02:29.741983 containerd[1868]: time="2026-01-17T00:02:29.741830540Z" level=info msg="Forcibly stopping sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\"" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.773 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0", GenerateName:"calico-kube-controllers-854949db7b-", Namespace:"calico-system", SelfLink:"", UID:"d6662ed8-4409-4f39-bb3b-ba711a87545b", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"854949db7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"bcb100d18e58fa9e87165a3ba7ab50083c5a92b862cb5e976198f7f809d101e8", Pod:"calico-kube-controllers-854949db7b-nkqdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ae9019d675", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.773 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.773 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" iface="eth0" netns="" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.773 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.773 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.791 [INFO][5765] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.792 [INFO][5765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.792 [INFO][5765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.801 [WARNING][5765] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.801 [INFO][5765] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" HandleID="k8s-pod-network.00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--kube--controllers--854949db7b--nkqdw-eth0" Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.803 [INFO][5765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:29.806146 containerd[1868]: 2026-01-17 00:02:29.804 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e" Jan 17 00:02:29.806573 containerd[1868]: time="2026-01-17T00:02:29.806195757Z" level=info msg="TearDown network for sandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" successfully" Jan 17 00:02:29.816206 containerd[1868]: time="2026-01-17T00:02:29.816162753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:29.816281 containerd[1868]: time="2026-01-17T00:02:29.816219073Z" level=info msg="RemovePodSandbox \"00c2e3003b0a75ee88af0701e3f03511b0211627e68de18a9917d3f6fd78237e\" returns successfully" Jan 17 00:02:29.816817 containerd[1868]: time="2026-01-17T00:02:29.816573153Z" level=info msg="StopPodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\"" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.848 [WARNING][5779] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e", Pod:"goldmane-666569f655-gjjfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali72035ebe7de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.848 [INFO][5779] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.849 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" iface="eth0" netns="" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.849 [INFO][5779] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.849 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.865 [INFO][5786] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.865 [INFO][5786] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.865 [INFO][5786] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.873 [WARNING][5786] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.873 [INFO][5786] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.874 [INFO][5786] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:29.878152 containerd[1868]: 2026-01-17 00:02:29.876 [INFO][5779] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.878844 containerd[1868]: time="2026-01-17T00:02:29.878560011Z" level=info msg="TearDown network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" successfully" Jan 17 00:02:29.878844 containerd[1868]: time="2026-01-17T00:02:29.878596531Z" level=info msg="StopPodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" returns successfully" Jan 17 00:02:29.879031 containerd[1868]: time="2026-01-17T00:02:29.879003411Z" level=info msg="RemovePodSandbox for \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\"" Jan 17 00:02:29.879093 containerd[1868]: time="2026-01-17T00:02:29.879037451Z" level=info msg="Forcibly stopping sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\"" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.910 [WARNING][5800] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7edf5fdf-55d5-4ab4-bb24-67c10b2d9654", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"5bc2f729b56208847499a886795fc8bf1ba6262188d2fa3e45692a8fd1533f9e", Pod:"goldmane-666569f655-gjjfd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali72035ebe7de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.910 [INFO][5800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.910 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" iface="eth0" netns="" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.910 [INFO][5800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.910 [INFO][5800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.927 [INFO][5807] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.927 [INFO][5807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.927 [INFO][5807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.935 [WARNING][5807] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.935 [INFO][5807] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" HandleID="k8s-pod-network.a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-goldmane--666569f655--gjjfd-eth0" Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.936 [INFO][5807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:29.939879 containerd[1868]: 2026-01-17 00:02:29.938 [INFO][5800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d" Jan 17 00:02:29.940337 containerd[1868]: time="2026-01-17T00:02:29.939916949Z" level=info msg="TearDown network for sandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" successfully" Jan 17 00:02:29.949382 containerd[1868]: time="2026-01-17T00:02:29.949343786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:29.949464 containerd[1868]: time="2026-01-17T00:02:29.949400186Z" level=info msg="RemovePodSandbox \"a474d318113e2e6824daef4d103dc1bf5c067d3e61f4fe5243ead096b949779d\" returns successfully" Jan 17 00:02:29.950055 containerd[1868]: time="2026-01-17T00:02:29.949806626Z" level=info msg="StopPodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\"" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.980 [WARNING][5821] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5fabfab-a45e-49bd-b3b5-28097628ac44", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1", Pod:"csi-node-driver-6zk6p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48f66b60171", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.980 [INFO][5821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.980 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" iface="eth0" netns="" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.980 [INFO][5821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.980 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.997 [INFO][5828] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.997 [INFO][5828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:29.997 [INFO][5828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:30.005 [WARNING][5828] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:30.006 [INFO][5828] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:30.007 [INFO][5828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.010014 containerd[1868]: 2026-01-17 00:02:30.008 [INFO][5821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.010624 containerd[1868]: time="2026-01-17T00:02:30.010502204Z" level=info msg="TearDown network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" successfully" Jan 17 00:02:30.010624 containerd[1868]: time="2026-01-17T00:02:30.010530604Z" level=info msg="StopPodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" returns successfully" Jan 17 00:02:30.010975 containerd[1868]: time="2026-01-17T00:02:30.010952124Z" level=info msg="RemovePodSandbox for \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\"" Jan 17 00:02:30.011320 containerd[1868]: time="2026-01-17T00:02:30.011100604Z" level=info msg="Forcibly stopping sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\"" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.047 [WARNING][5842] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5fabfab-a45e-49bd-b3b5-28097628ac44", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"5bd84d1b1ec136a6ebf3d88a90460edb3527ee46d367e58333caca8dba80a8a1", Pod:"csi-node-driver-6zk6p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali48f66b60171", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.048 [INFO][5842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.048 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" iface="eth0" netns="" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.048 [INFO][5842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.048 [INFO][5842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.065 [INFO][5849] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.065 [INFO][5849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.065 [INFO][5849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.073 [WARNING][5849] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.073 [INFO][5849] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" HandleID="k8s-pod-network.e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-csi--node--driver--6zk6p-eth0" Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.074 [INFO][5849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.078238 containerd[1868]: 2026-01-17 00:02:30.076 [INFO][5842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476" Jan 17 00:02:30.078238 containerd[1868]: time="2026-01-17T00:02:30.077791061Z" level=info msg="TearDown network for sandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" successfully" Jan 17 00:02:30.089257 containerd[1868]: time="2026-01-17T00:02:30.089225977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:30.089487 containerd[1868]: time="2026-01-17T00:02:30.089392136Z" level=info msg="RemovePodSandbox \"e4e8b789511d8eea1bfe475875fcd4a4f164cf24d3529ebb35042101c731d476\" returns successfully" Jan 17 00:02:30.090007 containerd[1868]: time="2026-01-17T00:02:30.089981936Z" level=info msg="StopPodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\"" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.123 [WARNING][5863] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26750179-e403-4a5c-a534-2a8e795f3838", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907", Pod:"coredns-668d6bf9bc-lpdqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f0703ebe1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.124 [INFO][5863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.124 [INFO][5863] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" iface="eth0" netns="" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.124 [INFO][5863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.124 [INFO][5863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.141 [INFO][5870] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.141 [INFO][5870] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.141 [INFO][5870] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.150 [WARNING][5870] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.150 [INFO][5870] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.151 [INFO][5870] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.154697 containerd[1868]: 2026-01-17 00:02:30.153 [INFO][5863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.155786 containerd[1868]: time="2026-01-17T00:02:30.154743593Z" level=info msg="TearDown network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" successfully" Jan 17 00:02:30.155786 containerd[1868]: time="2026-01-17T00:02:30.154767233Z" level=info msg="StopPodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" returns successfully" Jan 17 00:02:30.155786 containerd[1868]: time="2026-01-17T00:02:30.155398433Z" level=info msg="RemovePodSandbox for \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\"" Jan 17 00:02:30.155786 containerd[1868]: time="2026-01-17T00:02:30.155427073Z" level=info msg="Forcibly stopping sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\"" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.191 [WARNING][5884] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26750179-e403-4a5c-a534-2a8e795f3838", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"8d99eefa630ef0820a86ab6d835196a5a9981256b1f3ff5f87a5ea5a0c8e4907", Pod:"coredns-668d6bf9bc-lpdqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif2f0703ebe1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.191 [INFO][5884] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.191 [INFO][5884] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" iface="eth0" netns="" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.191 [INFO][5884] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.191 [INFO][5884] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.208 [INFO][5892] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.208 [INFO][5892] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.208 [INFO][5892] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.216 [WARNING][5892] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.216 [INFO][5892] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" HandleID="k8s-pod-network.d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-coredns--668d6bf9bc--lpdqp-eth0" Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.217 [INFO][5892] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.220314 containerd[1868]: 2026-01-17 00:02:30.218 [INFO][5884] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a" Jan 17 00:02:30.220813 containerd[1868]: time="2026-01-17T00:02:30.220341010Z" level=info msg="TearDown network for sandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" successfully" Jan 17 00:02:30.229189 containerd[1868]: time="2026-01-17T00:02:30.229141007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:30.229306 containerd[1868]: time="2026-01-17T00:02:30.229211047Z" level=info msg="RemovePodSandbox \"d841af62db6a0ee495aab09e0c6da7e160eede11861685859e0a27113ee9f53a\" returns successfully" Jan 17 00:02:30.229998 containerd[1868]: time="2026-01-17T00:02:30.229750127Z" level=info msg="StopPodSandbox for \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\"" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.267 [WARNING][5906] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e", Pod:"calico-apiserver-7cb7f6dddc-5gx8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali489b60b1142", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.267 [INFO][5906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.267 [INFO][5906] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" iface="eth0" netns="" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.267 [INFO][5906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.267 [INFO][5906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.285 [INFO][5913] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.285 [INFO][5913] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.285 [INFO][5913] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.293 [WARNING][5913] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.293 [INFO][5913] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.294 [INFO][5913] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.297645 containerd[1868]: 2026-01-17 00:02:30.296 [INFO][5906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.298509 containerd[1868]: time="2026-01-17T00:02:30.297682263Z" level=info msg="TearDown network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" successfully" Jan 17 00:02:30.298509 containerd[1868]: time="2026-01-17T00:02:30.297707183Z" level=info msg="StopPodSandbox for \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" returns successfully" Jan 17 00:02:30.298581 containerd[1868]: time="2026-01-17T00:02:30.298505782Z" level=info msg="RemovePodSandbox for \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\"" Jan 17 00:02:30.298581 containerd[1868]: time="2026-01-17T00:02:30.298530862Z" level=info msg="Forcibly stopping sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\"" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.326 [WARNING][5927] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e6f1f60-5b4d-4f6a-92be-ef48b02574bd", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"d83e978e76e7ab733b11d495e0dd640b5308f626ba666830f03f1c45a34d852e", Pod:"calico-apiserver-7cb7f6dddc-5gx8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali489b60b1142", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.327 [INFO][5927] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.327 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" iface="eth0" netns="" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.327 [INFO][5927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.327 [INFO][5927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.348 [INFO][5934] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.348 [INFO][5934] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.348 [INFO][5934] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.356 [WARNING][5934] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.356 [INFO][5934] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" HandleID="k8s-pod-network.fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--5gx8p-eth0" Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.358 [INFO][5934] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.362629 containerd[1868]: 2026-01-17 00:02:30.359 [INFO][5927] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0" Jan 17 00:02:30.362629 containerd[1868]: time="2026-01-17T00:02:30.361417880Z" level=info msg="TearDown network for sandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" successfully" Jan 17 00:02:30.372877 containerd[1868]: time="2026-01-17T00:02:30.372846436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:30.373017 containerd[1868]: time="2026-01-17T00:02:30.373002156Z" level=info msg="RemovePodSandbox \"fbab15225a0a816453f24650864a9db8a2924e48bec18861831c5d87480176b0\" returns successfully" Jan 17 00:02:30.373607 containerd[1868]: time="2026-01-17T00:02:30.373582956Z" level=info msg="StopPodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\"" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.402 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6139bf28-5324-4c65-a1a9-809ea0e0b5cf", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731", Pod:"calico-apiserver-7cb7f6dddc-pk2n6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f7d545ae88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.402 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.402 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" iface="eth0" netns="" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.402 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.402 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.419 [INFO][5955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.419 [INFO][5955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.419 [INFO][5955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.427 [WARNING][5955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.427 [INFO][5955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.428 [INFO][5955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.431452 containerd[1868]: 2026-01-17 00:02:30.429 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.432086 containerd[1868]: time="2026-01-17T00:02:30.431491895Z" level=info msg="TearDown network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" successfully" Jan 17 00:02:30.432086 containerd[1868]: time="2026-01-17T00:02:30.431515455Z" level=info msg="StopPodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" returns successfully" Jan 17 00:02:30.432086 containerd[1868]: time="2026-01-17T00:02:30.431928655Z" level=info msg="RemovePodSandbox for \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\"" Jan 17 00:02:30.432086 containerd[1868]: time="2026-01-17T00:02:30.431953375Z" level=info msg="Forcibly stopping sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\"" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.462 [WARNING][5969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0", GenerateName:"calico-apiserver-7cb7f6dddc-", Namespace:"calico-apiserver", SelfLink:"", UID:"6139bf28-5324-4c65-a1a9-809ea0e0b5cf", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cb7f6dddc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e1db9b2d97", ContainerID:"d68181d151bc3fadeb0ad45bb53ab39f85bcaebe972caaae82cb11c625e0e731", Pod:"calico-apiserver-7cb7f6dddc-pk2n6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f7d545ae88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.462 [INFO][5969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.462 [INFO][5969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" iface="eth0" netns="" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.463 [INFO][5969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.463 [INFO][5969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.479 [INFO][5977] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.479 [INFO][5977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.479 [INFO][5977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.487 [WARNING][5977] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.487 [INFO][5977] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" HandleID="k8s-pod-network.3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-calico--apiserver--7cb7f6dddc--pk2n6-eth0" Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.488 [INFO][5977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.491540 containerd[1868]: 2026-01-17 00:02:30.490 [INFO][5969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a" Jan 17 00:02:30.491540 containerd[1868]: time="2026-01-17T00:02:30.491530154Z" level=info msg="TearDown network for sandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" successfully" Jan 17 00:02:30.500613 containerd[1868]: time="2026-01-17T00:02:30.500575311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:30.500966 containerd[1868]: time="2026-01-17T00:02:30.500634951Z" level=info msg="RemovePodSandbox \"3590417b214fb7f116d196046f5b1a150245bb5e112ec8fa296c95edbeb8ce8a\" returns successfully" Jan 17 00:02:30.501410 containerd[1868]: time="2026-01-17T00:02:30.501137951Z" level=info msg="StopPodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\"" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.529 [WARNING][5991] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.529 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.530 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" iface="eth0" netns="" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.530 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.530 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.545 [INFO][5998] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.545 [INFO][5998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.545 [INFO][5998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.553 [WARNING][5998] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.553 [INFO][5998] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.554 [INFO][5998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.557594 containerd[1868]: 2026-01-17 00:02:30.556 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.559016 containerd[1868]: time="2026-01-17T00:02:30.558256130Z" level=info msg="TearDown network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" successfully" Jan 17 00:02:30.559016 containerd[1868]: time="2026-01-17T00:02:30.558293170Z" level=info msg="StopPodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" returns successfully" Jan 17 00:02:30.559016 containerd[1868]: time="2026-01-17T00:02:30.558712850Z" level=info msg="RemovePodSandbox for \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\"" Jan 17 00:02:30.559016 containerd[1868]: time="2026-01-17T00:02:30.558738410Z" level=info msg="Forcibly stopping sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\"" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.586 [WARNING][6012] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" WorkloadEndpoint="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.586 [INFO][6012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.586 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" iface="eth0" netns="" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.587 [INFO][6012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.587 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.603 [INFO][6019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.603 [INFO][6019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.603 [INFO][6019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.614 [WARNING][6019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.614 [INFO][6019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" HandleID="k8s-pod-network.681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Workload="ci--4081.3.6--n--e1db9b2d97-k8s-whisker--5c86fff86c--nvh7r-eth0" Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.616 [INFO][6019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:30.620263 containerd[1868]: 2026-01-17 00:02:30.618 [INFO][6012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c" Jan 17 00:02:30.620263 containerd[1868]: time="2026-01-17T00:02:30.619605189Z" level=info msg="TearDown network for sandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" successfully" Jan 17 00:02:30.637018 containerd[1868]: time="2026-01-17T00:02:30.636981662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:30.637166 containerd[1868]: time="2026-01-17T00:02:30.637151582Z" level=info msg="RemovePodSandbox \"681e9dc9d884d913dc57e25cfdfe1b3ac554d38c0700aa80b37e6cfb1dd9fb6c\" returns successfully" Jan 17 00:02:31.504629 containerd[1868]: time="2026-01-17T00:02:31.504396675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:02:31.812414 containerd[1868]: time="2026-01-17T00:02:31.812263526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:31.816492 containerd[1868]: time="2026-01-17T00:02:31.816411125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:02:31.816570 containerd[1868]: time="2026-01-17T00:02:31.816489004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:31.816667 kubelet[3363]: E0117 00:02:31.816619 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:31.816928 kubelet[3363]: E0117 00:02:31.816675 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 
00:02:31.817119 containerd[1868]: time="2026-01-17T00:02:31.817089044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:02:31.817406 kubelet[3363]: E0117 00:02:31.816883 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2h2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854949db7b-nkqdw_calico-system(d6662ed8-4409-4f39-bb3b-ba711a87545b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:31.818670 kubelet[3363]: E0117 00:02:31.818639 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:32.082653 containerd[1868]: time="2026-01-17T00:02:32.082515710Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:32.085886 containerd[1868]: time="2026-01-17T00:02:32.085832069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:02:32.085959 containerd[1868]: time="2026-01-17T00:02:32.085926789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:32.086269 kubelet[3363]: E0117 00:02:32.086070 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:32.086269 kubelet[3363]: E0117 00:02:32.086117 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:32.086394 kubelet[3363]: E0117 00:02:32.086282 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mg8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-pk2n6_calico-apiserver(6139bf28-5324-4c65-a1a9-809ea0e0b5cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:32.086931 containerd[1868]: time="2026-01-17T00:02:32.086726309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:02:32.088340 kubelet[3363]: E0117 00:02:32.088293 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:32.335415 containerd[1868]: time="2026-01-17T00:02:32.335271621Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:32.343033 containerd[1868]: time="2026-01-17T00:02:32.342936618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:02:32.343033 containerd[1868]: time="2026-01-17T00:02:32.343001898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:32.343466 kubelet[3363]: E0117 00:02:32.343266 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:32.343466 kubelet[3363]: E0117 00:02:32.343318 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:32.343466 kubelet[3363]: E0117 
00:02:32.343426 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-5gx8p_calico-apiserver(7e6f1f60-5b4d-4f6a-92be-ef48b02574bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:32.344764 kubelet[3363]: E0117 00:02:32.344721 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:32.505316 containerd[1868]: time="2026-01-17T00:02:32.504652241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:02:32.821289 containerd[1868]: time="2026-01-17T00:02:32.821237208Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:32.826665 containerd[1868]: time="2026-01-17T00:02:32.826630607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:02:32.826739 containerd[1868]: time="2026-01-17T00:02:32.826722967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:02:32.826881 kubelet[3363]: E0117 00:02:32.826843 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:32.827112 kubelet[3363]: E0117 00:02:32.826890 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:32.827112 kubelet[3363]: E0117 00:02:32.826988 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:32.829474 containerd[1868]: time="2026-01-17T00:02:32.829062086Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:02:33.247589 containerd[1868]: time="2026-01-17T00:02:33.247334777Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:33.250941 containerd[1868]: time="2026-01-17T00:02:33.250837696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:02:33.250941 containerd[1868]: time="2026-01-17T00:02:33.250905016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:02:33.251077 kubelet[3363]: E0117 00:02:33.251029 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:33.251131 kubelet[3363]: E0117 00:02:33.251079 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:33.251242 kubelet[3363]: E0117 00:02:33.251197 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:33.252694 kubelet[3363]: E0117 00:02:33.252463 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:02:33.506716 containerd[1868]: time="2026-01-17T00:02:33.505836446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:02:33.761154 containerd[1868]: time="2026-01-17T00:02:33.761049075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:33.767426 containerd[1868]: time="2026-01-17T00:02:33.767262313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:02:33.767426 containerd[1868]: time="2026-01-17T00:02:33.767333553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:33.767559 kubelet[3363]: E0117 00:02:33.767448 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:33.767559 kubelet[3363]: E0117 00:02:33.767492 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:33.768000 kubelet[3363]: E0117 00:02:33.767618 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph8gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjjfd_calico-system(7edf5fdf-55d5-4ab4-bb24-67c10b2d9654): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:33.768765 kubelet[3363]: E0117 00:02:33.768733 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:35.507036 kubelet[3363]: E0117 00:02:35.506981 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:02:43.515475 kubelet[3363]: E0117 00:02:43.515431 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:45.506132 kubelet[3363]: E0117 00:02:45.503937 3363 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:02:46.506757 kubelet[3363]: E0117 00:02:46.506711 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:47.505471 kubelet[3363]: E0117 00:02:47.505323 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:02:47.505471 kubelet[3363]: E0117 00:02:47.505928 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:02:50.509079 containerd[1868]: time="2026-01-17T00:02:50.508964329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:02:50.788941 containerd[1868]: time="2026-01-17T00:02:50.788897945Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:50.793377 containerd[1868]: time="2026-01-17T00:02:50.793338664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:02:50.793377 containerd[1868]: time="2026-01-17T00:02:50.793423344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:02:50.793591 kubelet[3363]: E0117 00:02:50.793537 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:02:50.793591 kubelet[3363]: E0117 00:02:50.793584 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:02:50.794152 kubelet[3363]: E0117 00:02:50.793684 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2f3b6028e4fe4b7a9477513ac0bfee1b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:50.796578 containerd[1868]: time="2026-01-17T00:02:50.796480943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:02:51.087748 containerd[1868]: time="2026-01-17T00:02:51.087576957Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:51.092027 containerd[1868]: time="2026-01-17T00:02:51.091986796Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:02:51.092107 containerd[1868]: time="2026-01-17T00:02:51.092087876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:51.092396 kubelet[3363]: E0117 00:02:51.092249 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:02:51.092396 kubelet[3363]: E0117 00:02:51.092297 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:02:51.093619 kubelet[3363]: E0117 00:02:51.092534 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Jan 17 00:02:51.094951 kubelet[3363]: E0117 00:02:51.094899 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:02:56.505424 containerd[1868]: time="2026-01-17T00:02:56.505382217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:02:56.739556 containerd[1868]: time="2026-01-17T00:02:56.739354247Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:56.743956 containerd[1868]: time="2026-01-17T00:02:56.743676366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:02:56.743956 containerd[1868]: time="2026-01-17T00:02:56.743772486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:56.744072 kubelet[3363]: E0117 00:02:56.743901 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:56.744072 kubelet[3363]: E0117 00:02:56.743954 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:56.748441 kubelet[3363]: E0117 00:02:56.748375 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2h2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854949db7b-nkqdw_calico-system(d6662ed8-4409-4f39-bb3b-ba711a87545b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:56.749837 kubelet[3363]: E0117 00:02:56.749521 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:02:59.508233 containerd[1868]: time="2026-01-17T00:02:59.508115291Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:02:59.794324 containerd[1868]: time="2026-01-17T00:02:59.794286430Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:59.798261 containerd[1868]: time="2026-01-17T00:02:59.798220189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:02:59.798354 containerd[1868]: time="2026-01-17T00:02:59.798305709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:59.798440 kubelet[3363]: E0117 00:02:59.798394 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:59.799332 kubelet[3363]: E0117 00:02:59.798450 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:59.799332 kubelet[3363]: E0117 00:02:59.799085 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph8gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjjfd_calico-system(7edf5fdf-55d5-4ab4-bb24-67c10b2d9654): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:59.801190 kubelet[3363]: E0117 00:02:59.800258 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:02:59.801282 containerd[1868]: time="2026-01-17T00:02:59.800432508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:03:00.071202 containerd[1868]: time="2026-01-17T00:03:00.071073050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:00.077865 containerd[1868]: time="2026-01-17T00:03:00.076625529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:03:00.077865 containerd[1868]: time="2026-01-17T00:03:00.076733649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:03:00.077865 containerd[1868]: time="2026-01-17T00:03:00.077374849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:00.078039 kubelet[3363]: E0117 00:03:00.076847 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:00.078039 kubelet[3363]: E0117 00:03:00.076892 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:00.078039 kubelet[3363]: E0117 
00:03:00.077079 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:00.344687 containerd[1868]: time="2026-01-17T00:03:00.344434235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:00.347740 containerd[1868]: time="2026-01-17T00:03:00.347642155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:00.347740 containerd[1868]: time="2026-01-17T00:03:00.347716395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:00.348025 kubelet[3363]: E0117 00:03:00.347984 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:00.348111 kubelet[3363]: E0117 00:03:00.348035 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:00.348389 kubelet[3363]: E0117 00:03:00.348285 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mg8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-pk2n6_calico-apiserver(6139bf28-5324-4c65-a1a9-809ea0e0b5cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:00.348524 containerd[1868]: time="2026-01-17T00:03:00.348357274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:03:00.349662 kubelet[3363]: E0117 00:03:00.349600 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:03:00.640302 containerd[1868]: 
time="2026-01-17T00:03:00.640070263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:00.644122 containerd[1868]: time="2026-01-17T00:03:00.644026942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:03:00.644122 containerd[1868]: time="2026-01-17T00:03:00.644100502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:03:00.644408 kubelet[3363]: E0117 00:03:00.644365 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:00.644462 kubelet[3363]: E0117 00:03:00.644416 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:00.644564 kubelet[3363]: E0117 00:03:00.644520 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:00.645819 kubelet[3363]: E0117 00:03:00.645777 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:03:00.988736 waagent[2046]: 2026-01-17T00:03:00.987889Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:03:00.999812 waagent[2046]: 2026-01-17T00:03:00.998811Z INFO ExtHandler Jan 17 00:03:00.999812 waagent[2046]: 2026-01-17T00:03:00.998921Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: df92e99d-193b-40f5-ae19-d16c6d3e7265 eTag: 1688846199061281646 source: Fabric] Jan 17 00:03:00.999812 waagent[2046]: 2026-01-17T00:03:00.999296Z INFO ExtHandler The vmSettings originated via 
Fabric; will ignore them. Jan 17 00:03:01.000287 waagent[2046]: 2026-01-17T00:03:01.000242Z INFO ExtHandler Jan 17 00:03:01.001839 waagent[2046]: 2026-01-17T00:03:01.001155Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:03:01.067383 waagent[2046]: 2026-01-17T00:03:01.067331Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:03:01.194916 waagent[2046]: 2026-01-17T00:03:01.194810Z INFO ExtHandler Downloaded certificate {'thumbprint': '625A20A31B93060F532103968457E53B2569A52F', 'hasPrivateKey': True} Jan 17 00:03:01.198282 waagent[2046]: 2026-01-17T00:03:01.196972Z INFO ExtHandler Fetch goal state completed Jan 17 00:03:01.198779 waagent[2046]: 2026-01-17T00:03:01.198738Z INFO ExtHandler ExtHandler Jan 17 00:03:01.198931 waagent[2046]: 2026-01-17T00:03:01.198896Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 6a7d84e9-eadf-4c61-8531-27fa0f42f2f6 correlation 17fd6630-1b3b-4cee-af8d-0cc31beba16f created: 2026-01-17T00:02:54.940147Z] Jan 17 00:03:01.201399 waagent[2046]: 2026-01-17T00:03:01.201358Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:03:01.204186 waagent[2046]: 2026-01-17T00:03:01.201994Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms] Jan 17 00:03:01.505811 containerd[1868]: time="2026-01-17T00:03:01.505587271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:01.770413 containerd[1868]: time="2026-01-17T00:03:01.770287224Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:01.776762 containerd[1868]: time="2026-01-17T00:03:01.776714223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:01.776847 containerd[1868]: time="2026-01-17T00:03:01.776820863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:01.776970 kubelet[3363]: E0117 00:03:01.776933 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:01.777330 kubelet[3363]: E0117 00:03:01.776978 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:01.777330 kubelet[3363]: E0117 00:03:01.777095 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-5gx8p_calico-apiserver(7e6f1f60-5b4d-4f6a-92be-ef48b02574bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:01.779321 kubelet[3363]: E0117 00:03:01.779284 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:03:04.506725 kubelet[3363]: E0117 00:03:04.506658 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:03:10.748959 systemd[1]: run-containerd-runc-k8s.io-8f49fa13d107342d447ed10e9dd9112cef6e960cba5e6ad203e5130d48433a71-runc.jPSqji.mount: Deactivated successfully. Jan 17 00:03:12.508038 kubelet[3363]: E0117 00:03:12.507715 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:03:12.513188 kubelet[3363]: E0117 00:03:12.510270 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:03:12.513985 kubelet[3363]: E0117 00:03:12.513924 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:03:13.505661 kubelet[3363]: E0117 00:03:13.504737 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:03:15.505680 kubelet[3363]: E0117 00:03:15.505637 3363 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654" Jan 17 00:03:15.924243 systemd[1]: Started sshd@7-10.200.20.34:22-10.200.16.10:47274.service - OpenSSH per-connection server daemon (10.200.16.10:47274). Jan 17 00:03:16.432203 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 47274 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:16.468444 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:16.485923 systemd-logind[1817]: New session 10 of user core. Jan 17 00:03:16.488400 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:03:16.510211 kubelet[3363]: E0117 00:03:16.509422 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d" Jan 17 00:03:16.869494 sshd[6094]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:16.879492 systemd[1]: sshd@7-10.200.20.34:22-10.200.16.10:47274.service: Deactivated successfully. Jan 17 00:03:16.886108 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:03:16.889140 systemd-logind[1817]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:03:16.890103 systemd-logind[1817]: Removed session 10. Jan 17 00:03:21.965452 systemd[1]: Started sshd@8-10.200.20.34:22-10.200.16.10:49086.service - OpenSSH per-connection server daemon (10.200.16.10:49086). Jan 17 00:03:22.464934 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 49086 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:03:22.469100 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:22.478271 systemd-logind[1817]: New session 11 of user core. Jan 17 00:03:22.483770 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:03:22.917611 sshd[6114]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:22.922198 systemd[1]: sshd@8-10.200.20.34:22-10.200.16.10:49086.service: Deactivated successfully. Jan 17 00:03:22.925384 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:03:22.926745 systemd-logind[1817]: Session 11 logged out. 
Waiting for processes to exit. Jan 17 00:03:22.928357 systemd-logind[1817]: Removed session 11. Jan 17 00:03:23.506237 kubelet[3363]: E0117 00:03:23.505983 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd" Jan 17 00:03:25.511093 kubelet[3363]: E0117 00:03:25.510566 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44" Jan 17 00:03:26.504101 kubelet[3363]: E0117 00:03:26.504046 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf" Jan 17 00:03:27.505299 kubelet[3363]: E0117 00:03:27.504757 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b" Jan 17 00:03:28.005779 systemd[1]: Started sshd@9-10.200.20.34:22-10.200.16.10:49096.service - OpenSSH per-connection server daemon (10.200.16.10:49096). 
Jan 17 00:03:28.505206 kubelet[3363]: E0117 00:03:28.504401 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654"
Jan 17 00:03:28.508248 sshd[6140]: Accepted publickey for core from 10.200.16.10 port 49096 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:28.511483 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:28.516694 systemd-logind[1817]: New session 12 of user core.
Jan 17 00:03:28.522438 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 00:03:28.920127 sshd[6140]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:28.928782 systemd-logind[1817]: Session 12 logged out. Waiting for processes to exit.
Jan 17 00:03:28.929031 systemd[1]: sshd@9-10.200.20.34:22-10.200.16.10:49096.service: Deactivated successfully.
Jan 17 00:03:28.935662 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 00:03:28.938672 systemd-logind[1817]: Removed session 12.
Jan 17 00:03:29.001201 systemd[1]: Started sshd@10-10.200.20.34:22-10.200.16.10:49110.service - OpenSSH per-connection server daemon (10.200.16.10:49110).
Jan 17 00:03:29.449489 sshd[6155]: Accepted publickey for core from 10.200.16.10 port 49110 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:29.452454 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:29.461722 systemd-logind[1817]: New session 13 of user core.
Jan 17 00:03:29.470413 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 00:03:29.926759 sshd[6155]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:29.932166 systemd[1]: sshd@10-10.200.20.34:22-10.200.16.10:49110.service: Deactivated successfully.
Jan 17 00:03:29.937103 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 00:03:29.940429 systemd-logind[1817]: Session 13 logged out. Waiting for processes to exit.
Jan 17 00:03:29.941412 systemd-logind[1817]: Removed session 13.
Jan 17 00:03:30.013392 systemd[1]: Started sshd@11-10.200.20.34:22-10.200.16.10:45200.service - OpenSSH per-connection server daemon (10.200.16.10:45200).
Jan 17 00:03:30.509183 sshd[6170]: Accepted publickey for core from 10.200.16.10 port 45200 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:30.512107 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:30.519446 systemd-logind[1817]: New session 14 of user core.
Jan 17 00:03:30.524755 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 00:03:30.958082 sshd[6170]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:30.965381 systemd-logind[1817]: Session 14 logged out. Waiting for processes to exit.
Jan 17 00:03:30.966010 systemd[1]: sshd@11-10.200.20.34:22-10.200.16.10:45200.service: Deactivated successfully.
Jan 17 00:03:30.970649 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 00:03:30.971759 systemd-logind[1817]: Removed session 14.
Jan 17 00:03:31.505213 containerd[1868]: time="2026-01-17T00:03:31.504469463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 17 00:03:31.748120 containerd[1868]: time="2026-01-17T00:03:31.748076010Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:31.751665 containerd[1868]: time="2026-01-17T00:03:31.751630009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 17 00:03:31.751808 containerd[1868]: time="2026-01-17T00:03:31.751721969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 17 00:03:31.752001 kubelet[3363]: E0117 00:03:31.751959 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:03:31.752348 kubelet[3363]: E0117 00:03:31.752011 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 17 00:03:31.761834 kubelet[3363]: E0117 00:03:31.761735 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2f3b6028e4fe4b7a9477513ac0bfee1b,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:31.766264 containerd[1868]: time="2026-01-17T00:03:31.765249366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 17 00:03:32.033878 containerd[1868]: time="2026-01-17T00:03:32.033834947Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:32.037314 containerd[1868]: time="2026-01-17T00:03:32.037272067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:03:32.038001 containerd[1868]: time="2026-01-17T00:03:32.037370427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:03:32.038048 kubelet[3363]: E0117 00:03:32.037510 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:03:32.038048 kubelet[3363]: E0117 00:03:32.037556 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 17 00:03:32.038048 kubelet[3363]: E0117 00:03:32.037657 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8f9bbb7c5-bvnkf_calico-system(a8e344f5-3821-4ec4-a6da-be956667501d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:32.039112 kubelet[3363]: E0117 00:03:32.039077 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d"
Jan 17 00:03:35.506589 kubelet[3363]: E0117 00:03:35.505683 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd"
Jan 17 00:03:36.043724 systemd[1]: Started sshd@12-10.200.20.34:22-10.200.16.10:45214.service - OpenSSH per-connection server daemon (10.200.16.10:45214).
Jan 17 00:03:36.491636 sshd[6194]: Accepted publickey for core from 10.200.16.10 port 45214 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:36.492768 sshd[6194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:36.496403 systemd-logind[1817]: New session 15 of user core.
Jan 17 00:03:36.506450 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 00:03:36.898126 sshd[6194]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:36.905850 systemd-logind[1817]: Session 15 logged out. Waiting for processes to exit.
Jan 17 00:03:36.906541 systemd[1]: sshd@12-10.200.20.34:22-10.200.16.10:45214.service: Deactivated successfully.
Jan 17 00:03:36.912670 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 00:03:36.916701 systemd-logind[1817]: Removed session 15.
Jan 17 00:03:37.505631 kubelet[3363]: E0117 00:03:37.505222 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
Jan 17 00:03:40.505186 containerd[1868]: time="2026-01-17T00:03:40.504502631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 17 00:03:40.746999 systemd[1]: run-containerd-runc-k8s.io-8f49fa13d107342d447ed10e9dd9112cef6e960cba5e6ad203e5130d48433a71-runc.eFU6gK.mount: Deactivated successfully.
Jan 17 00:03:40.749701 containerd[1868]: time="2026-01-17T00:03:40.749654979Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:40.753059 containerd[1868]: time="2026-01-17T00:03:40.752991539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 17 00:03:40.753144 containerd[1868]: time="2026-01-17T00:03:40.753100699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:03:40.753300 kubelet[3363]: E0117 00:03:40.753259 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:03:40.753696 kubelet[3363]: E0117 00:03:40.753313 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:03:40.753696 kubelet[3363]: E0117 00:03:40.753438 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c2h2r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-854949db7b-nkqdw_calico-system(d6662ed8-4409-4f39-bb3b-ba711a87545b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:40.754881 kubelet[3363]: E0117 00:03:40.754841 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b"
Jan 17 00:03:41.506882 containerd[1868]: time="2026-01-17T00:03:41.506804140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:03:41.911296 containerd[1868]: time="2026-01-17T00:03:41.911233775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:41.917120 containerd[1868]: time="2026-01-17T00:03:41.917073093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:03:41.917819 containerd[1868]: time="2026-01-17T00:03:41.917176933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:03:41.917884 kubelet[3363]: E0117 00:03:41.917297 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:03:41.917884 kubelet[3363]: E0117 00:03:41.917344 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:03:41.917884 kubelet[3363]: E0117 00:03:41.917449 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2mg8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-pk2n6_calico-apiserver(6139bf28-5324-4c65-a1a9-809ea0e0b5cf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:41.918921 kubelet[3363]: E0117 00:03:41.918882 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf"
Jan 17 00:03:41.984748 systemd[1]: Started sshd@13-10.200.20.34:22-10.200.16.10:52142.service - OpenSSH per-connection server daemon (10.200.16.10:52142).
Jan 17 00:03:42.488939 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 52142 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:42.490578 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:42.494848 systemd-logind[1817]: New session 16 of user core.
Jan 17 00:03:42.500841 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 00:03:42.508165 containerd[1868]: time="2026-01-17T00:03:42.507893889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 17 00:03:42.784839 containerd[1868]: time="2026-01-17T00:03:42.784791551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:42.791289 containerd[1868]: time="2026-01-17T00:03:42.791237549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 17 00:03:42.791444 containerd[1868]: time="2026-01-17T00:03:42.791349229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:03:42.791659 kubelet[3363]: E0117 00:03:42.791615 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:03:42.791726 kubelet[3363]: E0117 00:03:42.791671 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 17 00:03:42.791836 kubelet[3363]: E0117 00:03:42.791792 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ph8gt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-gjjfd_calico-system(7edf5fdf-55d5-4ab4-bb24-67c10b2d9654): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:42.793353 kubelet[3363]: E0117 00:03:42.793301 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654"
Jan 17 00:03:42.909005 sshd[6233]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:42.912222 systemd[1]: sshd@13-10.200.20.34:22-10.200.16.10:52142.service: Deactivated successfully.
Jan 17 00:03:42.916729 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 00:03:42.917348 systemd-logind[1817]: Session 16 logged out. Waiting for processes to exit.
Jan 17 00:03:42.919442 systemd-logind[1817]: Removed session 16.
Jan 17 00:03:43.505097 kubelet[3363]: E0117 00:03:43.504999 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d"
Jan 17 00:03:47.983453 systemd[1]: Started sshd@14-10.200.20.34:22-10.200.16.10:52150.service - OpenSSH per-connection server daemon (10.200.16.10:52150).
Jan 17 00:03:48.438192 sshd[6254]: Accepted publickey for core from 10.200.16.10 port 52150 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:48.439257 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:48.444888 systemd-logind[1817]: New session 17 of user core.
Jan 17 00:03:48.449681 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 00:03:48.894206 sshd[6254]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:48.897363 systemd[1]: sshd@14-10.200.20.34:22-10.200.16.10:52150.service: Deactivated successfully.
Jan 17 00:03:48.901921 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 00:03:48.902691 systemd-logind[1817]: Session 17 logged out. Waiting for processes to exit.
Jan 17 00:03:48.903507 systemd-logind[1817]: Removed session 17.
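Every pull failure recorded in this capture has the same shape: containerd resolves the tag against ghcr.io, the registry answers 404 ("trying next host - response was http.StatusNotFound"), and kubelet surfaces that as ErrImagePull and then ImagePullBackOff. The lookup can be reproduced outside kubelet; below is a minimal Go sketch (not part of the log — the repository and tag are taken from the records above, while the file name probe.go, the anonymous-token step, and the Accept header are standard OCI-registry details assumed here):

// probe.go — reproduce the registry resolution containerd performs above
// and report the HTTP status ghcr.io returns for the tag.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4"

	// ghcr.io hands out anonymous pull tokens for public repositories.
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest endpoint, as a registry client does during resolution.
	req, _ := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	// A "404 Not Found" here corresponds to the http.StatusNotFound that
	// containerd logs before kubelet enters back-off.
	fmt.Println(res.Status)
}

A 404 from this probe would match the "not found" errors kubelet keeps re-reporting below; the ImagePullBackOff records that follow are kubelet retrying the same resolution on its backoff schedule.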
Jan 17 00:03:50.504913 containerd[1868]: time="2026-01-17T00:03:50.504863211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:03:50.745529 containerd[1868]: time="2026-01-17T00:03:50.745416570Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:50.750223 containerd[1868]: time="2026-01-17T00:03:50.748830609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:03:50.750223 containerd[1868]: time="2026-01-17T00:03:50.748900449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:03:50.750352 kubelet[3363]: E0117 00:03:50.749067 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:03:50.750352 kubelet[3363]: E0117 00:03:50.749127 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:03:50.750352 kubelet[3363]: E0117 00:03:50.749280 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdr7c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7cb7f6dddc-5gx8p_calico-apiserver(7e6f1f60-5b4d-4f6a-92be-ef48b02574bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:50.751352 kubelet[3363]: E0117 00:03:50.751301 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd"
Jan 17 00:03:51.506513 containerd[1868]: time="2026-01-17T00:03:51.506471320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 17 00:03:51.508536 kubelet[3363]: E0117 00:03:51.507367 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b"
Jan 17 00:03:51.796115 containerd[1868]: time="2026-01-17T00:03:51.796064030Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:51.799433 containerd[1868]: time="2026-01-17T00:03:51.799383790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 17 00:03:51.799537 containerd[1868]: time="2026-01-17T00:03:51.799494350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:03:51.800493 kubelet[3363]: E0117 00:03:51.800286 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:03:51.800493 kubelet[3363]: E0117 00:03:51.800337 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:03:51.800493 kubelet[3363]: E0117 00:03:51.800443 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:51.803975 containerd[1868]: time="2026-01-17T00:03:51.803784429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:03:52.072194 containerd[1868]: time="2026-01-17T00:03:52.071952063Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:03:52.075486 containerd[1868]: time="2026-01-17T00:03:52.075388423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:03:52.075486 containerd[1868]: time="2026-01-17T00:03:52.075442623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:03:52.076129 kubelet[3363]: E0117 00:03:52.075735 3363 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:03:52.076129 kubelet[3363]: E0117 00:03:52.075780 3363 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:03:52.076129 kubelet[3363]: E0117 00:03:52.075881 3363 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zqshz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6zk6p_calico-system(e5fabfab-a45e-49bd-b3b5-28097628ac44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:03:52.077287 kubelet[3363]: E0117 00:03:52.077245 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
Jan 17 00:03:53.984393 systemd[1]: Started sshd@15-10.200.20.34:22-10.200.16.10:36962.service - OpenSSH per-connection server daemon (10.200.16.10:36962).
Jan 17 00:03:54.467011 sshd[6282]: Accepted publickey for core from 10.200.16.10 port 36962 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:54.468768 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:54.473806 systemd-logind[1817]: New session 18 of user core.
Jan 17 00:03:54.479410 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 00:03:54.890138 sshd[6282]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:54.896875 systemd[1]: sshd@15-10.200.20.34:22-10.200.16.10:36962.service: Deactivated successfully.
Jan 17 00:03:54.902899 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 00:03:54.904252 systemd-logind[1817]: Session 18 logged out. Waiting for processes to exit.
Jan 17 00:03:54.907280 systemd-logind[1817]: Removed session 18.
Jan 17 00:03:54.976266 systemd[1]: Started sshd@16-10.200.20.34:22-10.200.16.10:36974.service - OpenSSH per-connection server daemon (10.200.16.10:36974).
Jan 17 00:03:55.471183 sshd[6296]: Accepted publickey for core from 10.200.16.10 port 36974 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:55.472645 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:55.478263 systemd-logind[1817]: New session 19 of user core.
Jan 17 00:03:55.484466 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 00:03:55.504925 kubelet[3363]: E0117 00:03:55.504782 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf"
Jan 17 00:03:56.004581 sshd[6296]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:56.010437 systemd-logind[1817]: Session 19 logged out. Waiting for processes to exit.
Jan 17 00:03:56.010581 systemd[1]: sshd@16-10.200.20.34:22-10.200.16.10:36974.service: Deactivated successfully.
Jan 17 00:03:56.012028 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 00:03:56.014676 systemd-logind[1817]: Removed session 19.
Jan 17 00:03:56.087444 systemd[1]: Started sshd@17-10.200.20.34:22-10.200.16.10:36984.service - OpenSSH per-connection server daemon (10.200.16.10:36984).
Jan 17 00:03:56.506152 kubelet[3363]: E0117 00:03:56.505944 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d"
Jan 17 00:03:56.563132 sshd[6308]: Accepted publickey for core from 10.200.16.10 port 36984 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:56.564018 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:56.568378 systemd-logind[1817]: New session 20 of user core.
Jan 17 00:03:56.573539 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 00:03:57.663233 sshd[6308]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:57.670090 systemd[1]: sshd@17-10.200.20.34:22-10.200.16.10:36984.service: Deactivated successfully.
Jan 17 00:03:57.672335 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 00:03:57.673806 systemd-logind[1817]: Session 20 logged out. Waiting for processes to exit.
Jan 17 00:03:57.678337 systemd-logind[1817]: Removed session 20.
Jan 17 00:03:57.751488 systemd[1]: Started sshd@18-10.200.20.34:22-10.200.16.10:36988.service - OpenSSH per-connection server daemon (10.200.16.10:36988).
Jan 17 00:03:58.207167 sshd[6327]: Accepted publickey for core from 10.200.16.10 port 36988 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:58.210777 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:58.220949 systemd-logind[1817]: New session 21 of user core.
Jan 17 00:03:58.228599 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 00:03:58.504258 kubelet[3363]: E0117 00:03:58.504034 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654"
Jan 17 00:03:58.736761 sshd[6327]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:58.739553 systemd-logind[1817]: Session 21 logged out. Waiting for processes to exit.
Jan 17 00:03:58.739791 systemd[1]: sshd@18-10.200.20.34:22-10.200.16.10:36988.service: Deactivated successfully.
Jan 17 00:03:58.743013 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 00:03:58.744758 systemd-logind[1817]: Removed session 21.
Jan 17 00:03:58.816724 systemd[1]: Started sshd@19-10.200.20.34:22-10.200.16.10:37004.service - OpenSSH per-connection server daemon (10.200.16.10:37004).
Jan 17 00:03:59.262721 sshd[6339]: Accepted publickey for core from 10.200.16.10 port 37004 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:03:59.264922 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:03:59.278273 systemd-logind[1817]: New session 22 of user core.
Jan 17 00:03:59.281154 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 00:03:59.673499 sshd[6339]: pam_unix(sshd:session): session closed for user core
Jan 17 00:03:59.681759 systemd[1]: sshd@19-10.200.20.34:22-10.200.16.10:37004.service: Deactivated successfully.
Jan 17 00:03:59.687783 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 00:03:59.690116 systemd-logind[1817]: Session 22 logged out. Waiting for processes to exit.
Jan 17 00:03:59.692437 systemd-logind[1817]: Removed session 22.
Jan 17 00:04:01.507705 kubelet[3363]: E0117 00:04:01.507599 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd"
Jan 17 00:04:02.509548 kubelet[3363]: E0117 00:04:02.509485 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
Jan 17 00:04:04.505162 kubelet[3363]: E0117 00:04:04.505090 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b"
Jan 17 00:04:04.759084 systemd[1]: Started sshd@20-10.200.20.34:22-10.200.16.10:36748.service - OpenSSH per-connection server daemon (10.200.16.10:36748).
Jan 17 00:04:05.240550 sshd[6354]: Accepted publickey for core from 10.200.16.10 port 36748 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:04:05.241850 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:04:05.245418 systemd-logind[1817]: New session 23 of user core.
Jan 17 00:04:05.250613 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:04:05.732498 sshd[6354]: pam_unix(sshd:session): session closed for user core
Jan 17 00:04:05.738697 systemd[1]: sshd@20-10.200.20.34:22-10.200.16.10:36748.service: Deactivated successfully.
Jan 17 00:04:05.738975 systemd-logind[1817]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:04:05.741089 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:04:05.747409 systemd-logind[1817]: Removed session 23.
Jan 17 00:04:06.504061 kubelet[3363]: E0117 00:04:06.504014 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf"
Jan 17 00:04:09.508197 kubelet[3363]: E0117 00:04:09.507494 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d"
Jan 17 00:04:10.830952 systemd[1]: Started sshd@21-10.200.20.34:22-10.200.16.10:49500.service - OpenSSH per-connection server daemon (10.200.16.10:49500).
Jan 17 00:04:11.331425 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 49500 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:04:11.333146 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:04:11.339896 systemd-logind[1817]: New session 24 of user core.
Jan 17 00:04:11.351006 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:04:11.506815 kubelet[3363]: E0117 00:04:11.505866 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654"
Jan 17 00:04:11.744914 sshd[6391]: pam_unix(sshd:session): session closed for user core
Jan 17 00:04:11.747687 systemd[1]: sshd@21-10.200.20.34:22-10.200.16.10:49500.service: Deactivated successfully.
Jan 17 00:04:11.751602 systemd-logind[1817]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:04:11.752110 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:04:11.753803 systemd-logind[1817]: Removed session 24.
Jan 17 00:04:15.506426 kubelet[3363]: E0117 00:04:15.506332 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
Jan 17 00:04:16.504355 kubelet[3363]: E0117 00:04:16.504317 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd"
Jan 17 00:04:16.836646 systemd[1]: Started sshd@22-10.200.20.34:22-10.200.16.10:49502.service - OpenSSH per-connection server daemon (10.200.16.10:49502).
Jan 17 00:04:17.324621 sshd[6405]: Accepted publickey for core from 10.200.16.10 port 49502 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:04:17.328490 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:04:17.336035 systemd-logind[1817]: New session 25 of user core.
Jan 17 00:04:17.343151 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:04:17.749383 sshd[6405]: pam_unix(sshd:session): session closed for user core
Jan 17 00:04:17.752468 systemd-logind[1817]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:04:17.754052 systemd[1]: sshd@22-10.200.20.34:22-10.200.16.10:49502.service: Deactivated successfully.
Jan 17 00:04:17.757764 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:04:17.758727 systemd-logind[1817]: Removed session 25.
Jan 17 00:04:19.504651 kubelet[3363]: E0117 00:04:19.504529 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-854949db7b-nkqdw" podUID="d6662ed8-4409-4f39-bb3b-ba711a87545b"
Jan 17 00:04:20.505481 kubelet[3363]: E0117 00:04:20.505402 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d"
Jan 17 00:04:21.504422 kubelet[3363]: E0117 00:04:21.504099 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-pk2n6" podUID="6139bf28-5324-4c65-a1a9-809ea0e0b5cf"
Jan 17 00:04:22.504003 kubelet[3363]: E0117 00:04:22.503921 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-gjjfd" podUID="7edf5fdf-55d5-4ab4-bb24-67c10b2d9654"
Jan 17 00:04:22.826486 systemd[1]: Started sshd@23-10.200.20.34:22-10.200.16.10:48880.service - OpenSSH per-connection server daemon (10.200.16.10:48880).
Jan 17 00:04:23.275518 sshd[6418]: Accepted publickey for core from 10.200.16.10 port 48880 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:04:23.278845 sshd[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:04:23.287470 systemd-logind[1817]: New session 26 of user core.
Jan 17 00:04:23.298439 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:04:23.690155 sshd[6418]: pam_unix(sshd:session): session closed for user core
Jan 17 00:04:23.700704 systemd[1]: sshd@23-10.200.20.34:22-10.200.16.10:48880.service: Deactivated successfully.
Jan 17 00:04:23.702369 systemd-logind[1817]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:04:23.707668 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:04:23.710613 systemd-logind[1817]: Removed session 26.
Jan 17 00:04:28.774418 systemd[1]: Started sshd@24-10.200.20.34:22-10.200.16.10:48894.service - OpenSSH per-connection server daemon (10.200.16.10:48894).
Jan 17 00:04:29.244468 sshd[6432]: Accepted publickey for core from 10.200.16.10 port 48894 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:04:29.245728 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:04:29.249884 systemd-logind[1817]: New session 27 of user core.
Jan 17 00:04:29.258466 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 00:04:29.509618 kubelet[3363]: E0117 00:04:29.507663 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6zk6p" podUID="e5fabfab-a45e-49bd-b3b5-28097628ac44"
Jan 17 00:04:29.662201 sshd[6432]: pam_unix(sshd:session): session closed for user core
Jan 17 00:04:29.669127 systemd[1]: sshd@24-10.200.20.34:22-10.200.16.10:48894.service: Deactivated successfully.
Jan 17 00:04:29.671214 systemd-logind[1817]: Session 27 logged out. Waiting for processes to exit.
Jan 17 00:04:29.672796 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 00:04:29.673661 systemd-logind[1817]: Removed session 27.
Jan 17 00:04:30.505162 kubelet[3363]: E0117 00:04:30.504770 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7cb7f6dddc-5gx8p" podUID="7e6f1f60-5b4d-4f6a-92be-ef48b02574bd"
Jan 17 00:04:31.506041 kubelet[3363]: E0117 00:04:31.505966 3363 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8f9bbb7c5-bvnkf" podUID="a8e344f5-3821-4ec4-a6da-be956667501d"