Jan 17 00:04:49.220496 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 17 00:04:49.220518 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026 Jan 17 00:04:49.220526 kernel: KASLR enabled Jan 17 00:04:49.220532 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jan 17 00:04:49.220539 kernel: printk: bootconsole [pl11] enabled Jan 17 00:04:49.220545 kernel: efi: EFI v2.7 by EDK II Jan 17 00:04:49.220552 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3f215018 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Jan 17 00:04:49.220558 kernel: random: crng init done Jan 17 00:04:49.220564 kernel: ACPI: Early table checksum verification disabled Jan 17 00:04:49.220570 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Jan 17 00:04:49.220576 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220582 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220589 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 00:04:49.220595 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220603 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220609 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220616 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220624 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220630 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220636 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jan 17 00:04:49.220643 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:04:49.220649 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jan 17 00:04:49.220656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jan 17 00:04:49.220662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jan 17 00:04:49.220668 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jan 17 00:04:49.220675 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jan 17 00:04:49.220681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jan 17 00:04:49.220687 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jan 17 00:04:49.220695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jan 17 00:04:49.220701 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jan 17 00:04:49.220708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jan 17 00:04:49.220714 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jan 17 00:04:49.220720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jan 17 00:04:49.220727 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jan 17 00:04:49.220733 kernel: NUMA: NODE_DATA [mem 0x1bf7ef800-0x1bf7f4fff] Jan 17 00:04:49.220739 kernel: Zone ranges: Jan 17 00:04:49.220745 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Jan 17 00:04:49.220752 kernel: DMA32 empty Jan 17 00:04:49.220758 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:04:49.220764 kernel: Movable zone start for each node Jan 17 00:04:49.220775 kernel: Early memory node ranges Jan 17 00:04:49.220781 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jan 17 00:04:49.220788 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Jan 17 00:04:49.220795 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Jan 17 00:04:49.220802 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Jan 17 00:04:49.220810 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Jan 17 00:04:49.220817 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Jan 17 00:04:49.220823 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jan 17 00:04:49.220831 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jan 17 00:04:49.220837 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jan 17 00:04:49.220844 kernel: psci: probing for conduit method from ACPI. Jan 17 00:04:49.220851 kernel: psci: PSCIv1.1 detected in firmware. Jan 17 00:04:49.220857 kernel: psci: Using standard PSCI v0.2 function IDs Jan 17 00:04:49.220864 kernel: psci: MIGRATE_INFO_TYPE not supported. Jan 17 00:04:49.220871 kernel: psci: SMC Calling Convention v1.4 Jan 17 00:04:49.220878 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 17 00:04:49.220884 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 17 00:04:49.220892 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 17 00:04:49.220899 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 17 00:04:49.220906 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 17 00:04:49.220913 kernel: Detected PIPT I-cache on CPU0 Jan 17 00:04:49.220919 kernel: CPU features: detected: GIC system register CPU interface Jan 17 00:04:49.220926 kernel: CPU features: detected: Hardware dirty bit management Jan 17 00:04:49.220933 kernel: CPU features: detected: Spectre-BHB Jan 17 00:04:49.220940 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 17 00:04:49.220947 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 17 00:04:49.220953 kernel: CPU features: detected: ARM erratum 1418040 Jan 17 00:04:49.220960 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jan 17 00:04:49.220968 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 17 00:04:49.220975 kernel: alternatives: applying boot alternatives Jan 17 00:04:49.220983 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:04:49.220990 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:04:49.220997 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 00:04:49.221004 kernel: Fallback order for Node 0: 0 Jan 17 00:04:49.221011 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Jan 17 00:04:49.221017 kernel: Policy zone: Normal Jan 17 00:04:49.221024 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:04:49.221031 kernel: software IO TLB: area num 2. Jan 17 00:04:49.221038 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 00:04:49.221046 kernel: Memory: 3982636K/4194160K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 211524K reserved, 0K cma-reserved) Jan 17 00:04:49.221146 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:04:49.221153 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:04:49.221161 kernel: rcu: RCU event tracing is enabled. Jan 17 00:04:49.221168 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:04:49.221175 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:04:49.221181 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:04:49.221188 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:04:49.221195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:04:49.221202 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 00:04:49.221209 kernel: GICv3: 960 SPIs implemented Jan 17 00:04:49.221218 kernel: GICv3: 0 Extended SPIs implemented Jan 17 00:04:49.221225 kernel: Root IRQ handler: gic_handle_irq Jan 17 00:04:49.221231 kernel: GICv3: GICv3 features: 16 PPIs, RSS Jan 17 00:04:49.221238 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 00:04:49.221245 kernel: ITS: No ITS available, not enabling LPIs Jan 17 00:04:49.221252 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:04:49.221259 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:04:49.221266 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 00:04:49.221273 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 00:04:49.221280 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 00:04:49.221286 kernel: Console: colour dummy device 80x25 Jan 17 00:04:49.221295 kernel: printk: console [tty1] enabled Jan 17 00:04:49.221302 kernel: ACPI: Core revision 20230628 Jan 17 00:04:49.221309 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 00:04:49.221316 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:04:49.221323 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:04:49.221330 kernel: landlock: Up and running. Jan 17 00:04:49.221337 kernel: SELinux: Initializing. Jan 17 00:04:49.221345 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:04:49.221351 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 00:04:49.221360 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:04:49.221367 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:04:49.221374 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0x100000e, misc 0x31e1 Jan 17 00:04:49.221381 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0 Jan 17 00:04:49.221388 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:04:49.221395 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 00:04:49.221402 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:04:49.221410 kernel: Remapping and enabling EFI services. Jan 17 00:04:49.221422 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:04:49.221430 kernel: Detected PIPT I-cache on CPU1 Jan 17 00:04:49.221437 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 00:04:49.221445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 00:04:49.221454 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 00:04:49.221461 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:04:49.221468 kernel: SMP: Total of 2 processors activated. Jan 17 00:04:49.221476 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 00:04:49.221483 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 00:04:49.221492 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 00:04:49.221499 kernel: CPU features: detected: CRC32 instructions Jan 17 00:04:49.221507 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 00:04:49.221514 kernel: CPU features: detected: LSE atomic instructions Jan 17 00:04:49.221521 kernel: CPU features: detected: Privileged Access Never Jan 17 00:04:49.221529 kernel: CPU: All CPU(s) started at EL1 Jan 17 00:04:49.221536 kernel: alternatives: applying system-wide alternatives Jan 17 00:04:49.221543 kernel: devtmpfs: initialized Jan 17 00:04:49.221551 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:04:49.221559 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:04:49.221567 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:04:49.221574 kernel: SMBIOS 3.1.0 present. Jan 17 00:04:49.221581 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 00:04:49.221589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:04:49.221596 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 00:04:49.221604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 00:04:49.221612 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 00:04:49.221619 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:04:49.221627 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 00:04:49.221635 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:04:49.221642 kernel: cpuidle: using governor menu Jan 17 00:04:49.221649 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 00:04:49.221657 kernel: ASID allocator initialised with 32768 entries Jan 17 00:04:49.221664 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:04:49.221671 kernel: Serial: AMBA PL011 UART driver Jan 17 00:04:49.221678 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 00:04:49.221686 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 00:04:49.221695 kernel: Modules: 509008 pages in range for PLT usage Jan 17 00:04:49.221702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:04:49.221710 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:04:49.221717 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 00:04:49.221724 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 00:04:49.221732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:04:49.221739 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:04:49.221746 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 00:04:49.221754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 00:04:49.221762 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:04:49.221769 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:04:49.221777 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:04:49.221784 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:04:49.221791 kernel: ACPI: Interpreter enabled Jan 17 00:04:49.221799 kernel: ACPI: Using GIC for interrupt routing Jan 17 00:04:49.221806 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 00:04:49.221813 kernel: printk: console [ttyAMA0] enabled Jan 17 00:04:49.221820 kernel: printk: bootconsole [pl11] disabled Jan 17 00:04:49.221829 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 00:04:49.221837 kernel: iommu: Default domain type: Translated Jan 17 00:04:49.221844 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 00:04:49.221851 kernel: efivars: Registered efivars operations Jan 17 00:04:49.221859 kernel: vgaarb: loaded Jan 17 00:04:49.221866 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 00:04:49.221873 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:04:49.221880 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:04:49.221888 kernel: pnp: PnP ACPI init Jan 17 00:04:49.221896 kernel: pnp: PnP ACPI: found 0 devices Jan 17 00:04:49.221904 kernel: NET: Registered PF_INET protocol family Jan 17 00:04:49.221911 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:04:49.221918 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 00:04:49.221926 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:04:49.221933 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 00:04:49.221941 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 00:04:49.221948 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 00:04:49.221955 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:04:49.221964 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 00:04:49.221971 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 
00:04:49.221978 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:04:49.221986 kernel: kvm [1]: HYP mode not available Jan 17 00:04:49.221993 kernel: Initialise system trusted keyrings Jan 17 00:04:49.222001 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 00:04:49.222008 kernel: Key type asymmetric registered Jan 17 00:04:49.222015 kernel: Asymmetric key parser 'x509' registered Jan 17 00:04:49.222022 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 00:04:49.222031 kernel: io scheduler mq-deadline registered Jan 17 00:04:49.222038 kernel: io scheduler kyber registered Jan 17 00:04:49.222046 kernel: io scheduler bfq registered Jan 17 00:04:49.222057 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:04:49.222065 kernel: thunder_xcv, ver 1.0 Jan 17 00:04:49.222072 kernel: thunder_bgx, ver 1.0 Jan 17 00:04:49.222079 kernel: nicpf, ver 1.0 Jan 17 00:04:49.222087 kernel: nicvf, ver 1.0 Jan 17 00:04:49.222220 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 00:04:49.222295 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:04:48 UTC (1768608288) Jan 17 00:04:49.222305 kernel: efifb: probing for efifb Jan 17 00:04:49.222312 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:04:49.222320 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:04:49.222327 kernel: efifb: scrolling: redraw Jan 17 00:04:49.222334 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:04:49.222342 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:04:49.222349 kernel: fb0: EFI VGA frame buffer device Jan 17 00:04:49.222358 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 00:04:49.222366 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:04:49.222373 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 6 counters available Jan 17 00:04:49.222380 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 00:04:49.222388 kernel: watchdog: Hard watchdog permanently disabled Jan 17 00:04:49.222395 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:04:49.222402 kernel: Segment Routing with IPv6 Jan 17 00:04:49.222410 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:04:49.222417 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:04:49.222425 kernel: Key type dns_resolver registered Jan 17 00:04:49.222433 kernel: registered taskstats version 1 Jan 17 00:04:49.222440 kernel: Loading compiled-in X.509 certificates Jan 17 00:04:49.222448 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4' Jan 17 00:04:49.222458 kernel: Key type .fscrypt registered Jan 17 00:04:49.222467 kernel: Key type fscrypt-provisioning registered Jan 17 00:04:49.222475 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 00:04:49.222484 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:04:49.222492 kernel: ima: No architecture policies found Jan 17 00:04:49.222503 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 00:04:49.222511 kernel: clk: Disabling unused clocks Jan 17 00:04:49.222520 kernel: Freeing unused kernel memory: 39424K Jan 17 00:04:49.222528 kernel: Run /init as init process Jan 17 00:04:49.222537 kernel: with arguments: Jan 17 00:04:49.222544 kernel: /init Jan 17 00:04:49.222553 kernel: with environment: Jan 17 00:04:49.222561 kernel: HOME=/ Jan 17 00:04:49.222570 kernel: TERM=linux Jan 17 00:04:49.222580 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:04:49.222593 systemd[1]: Detected virtualization microsoft. Jan 17 00:04:49.222602 systemd[1]: Detected architecture arm64. Jan 17 00:04:49.222611 systemd[1]: Running in initrd. Jan 17 00:04:49.222619 systemd[1]: No hostname configured, using default hostname. Jan 17 00:04:49.222627 systemd[1]: Hostname set to . Jan 17 00:04:49.222635 systemd[1]: Initializing machine ID from random generator. Jan 17 00:04:49.222644 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:04:49.222653 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:04:49.222662 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:04:49.222672 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:04:49.222682 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:04:49.222691 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:04:49.222700 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:04:49.222711 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:04:49.222723 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:04:49.222732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:04:49.222742 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:04:49.222750 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:04:49.222758 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:04:49.222766 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:04:49.222773 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:04:49.222782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:04:49.222791 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:04:49.222799 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:04:49.222807 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:04:49.222815 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 17 00:04:49.222823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:04:49.222831 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:04:49.222839 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:04:49.222847 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:04:49.222857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:04:49.222865 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:04:49.222873 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:04:49.222880 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:04:49.222888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:04:49.222912 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 00:04:49.222933 systemd-journald[217]: Journal started Jan 17 00:04:49.222951 systemd-journald[217]: Runtime Journal (/run/log/journal/8b97ac4fb1a64c74af168cf50a3e0caf) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:04:49.232279 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 00:04:49.244424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:04:49.253064 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:04:49.257066 kernel: Bridge firewalling registered Jan 17 00:04:49.257106 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:04:49.256486 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 00:04:49.268942 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:04:49.278286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:04:49.284246 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:04:49.288477 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:04:49.302844 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:04:49.325312 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:04:49.332206 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:04:49.362289 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:04:49.369696 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:04:49.386947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:04:49.403140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:04:49.410062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:04:49.418849 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:04:49.444314 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:04:49.456203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 17 00:04:49.470767 dracut-cmdline[251]: dracut-dracut-053 Jan 17 00:04:49.470767 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83 Jan 17 00:04:49.474036 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:04:49.510279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:04:49.527346 systemd-resolved[255]: Positive Trust Anchors: Jan 17 00:04:49.527355 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:04:49.527387 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:04:49.529511 systemd-resolved[255]: Defaulting to hostname 'linux'. Jan 17 00:04:49.534671 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:04:49.539797 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:04:49.624075 kernel: SCSI subsystem initialized Jan 17 00:04:49.633061 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:04:49.642078 kernel: iscsi: registered transport (tcp) Jan 17 00:04:49.658591 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:04:49.658647 kernel: QLogic iSCSI HBA Driver Jan 17 00:04:49.703415 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:04:49.715536 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:04:49.742963 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:04:49.742995 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:04:49.748275 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:04:49.799068 kernel: raid6: neonx8 gen() 15788 MB/s Jan 17 00:04:49.813059 kernel: raid6: neonx4 gen() 15104 MB/s Jan 17 00:04:49.832057 kernel: raid6: neonx2 gen() 13223 MB/s Jan 17 00:04:49.852057 kernel: raid6: neonx1 gen() 10543 MB/s Jan 17 00:04:49.871082 kernel: raid6: int64x8 gen() 6972 MB/s Jan 17 00:04:49.890059 kernel: raid6: int64x4 gen() 7365 MB/s Jan 17 00:04:49.910058 kernel: raid6: int64x2 gen() 6145 MB/s Jan 17 00:04:49.932713 kernel: raid6: int64x1 gen() 5071 MB/s Jan 17 00:04:49.932724 kernel: raid6: using algorithm neonx8 gen() 15788 MB/s Jan 17 00:04:49.954594 kernel: raid6: .... 
xor() 12043 MB/s, rmw enabled Jan 17 00:04:49.954604 kernel: raid6: using neon recovery algorithm Jan 17 00:04:49.964369 kernel: xor: measuring software checksum speed Jan 17 00:04:49.964384 kernel: 8regs : 19812 MB/sec Jan 17 00:04:49.968033 kernel: 32regs : 19664 MB/sec Jan 17 00:04:49.970788 kernel: arm64_neon : 27061 MB/sec Jan 17 00:04:49.973967 kernel: xor: using function: arm64_neon (27061 MB/sec) Jan 17 00:04:50.024079 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:04:50.034315 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:04:50.049217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:04:50.068730 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jan 17 00:04:50.073028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:04:50.094186 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:04:50.109342 dracut-pre-trigger[450]: rd.md=0: removing MD RAID activation Jan 17 00:04:50.137791 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:04:50.153514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:04:50.192083 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:04:50.206257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:04:50.227307 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:04:50.239232 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:04:50.250986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:04:50.261951 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:04:50.277195 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:04:50.296021 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:04:50.321234 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 00:04:50.321258 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:04:50.296202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:04:50.344354 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 17 00:04:50.302449 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:04:50.310099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:04:50.389613 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:04:50.389634 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:04:50.389644 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:04:50.389653 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:04:50.310292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 17 00:04:50.426381 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 17 00:04:50.426405 kernel: scsi host0: storvsc_host_t Jan 17 00:04:50.426571 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:04:50.426670 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:04:50.426761 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jan 17 00:04:50.426858 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:04:50.333207 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:04:50.439591 kernel: scsi host1: storvsc_host_t Jan 17 00:04:50.362450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:04:50.376338 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:04:50.411682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:04:50.450812 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:04:50.482350 kernel: PTP clock support registered Jan 17 00:04:50.488983 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:04:50.506093 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:04:50.506282 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:04:50.506294 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:04:50.511637 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:04:50.517824 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:04:50.517869 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:04:50.520726 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:04:50.520742 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:04:50.441874 systemd-resolved[255]: Clock change detected. Flushing caches. Jan 17 00:04:50.468969 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:04:50.472208 kernel: hv_netvsc 7ced8d78-aef6-7ced-8d78-aef67ced8d78 eth0: VF slot 1 added Jan 17 00:04:50.472320 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:04:50.472417 systemd-journald[217]: Time jumped backwards, rotating. 
Jan 17 00:04:50.472456 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:04:50.472544 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:04:50.472658 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:04:50.481778 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:04:50.481821 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:04:50.489214 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:04:50.499022 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:04:50.499082 kernel: hv_pci 10c3d7b9-0826-48c1-bed0-48767d5790dd: PCI VMBus probing: Using version 0x10004 Jan 17 00:04:50.501736 kernel: hv_pci 10c3d7b9-0826-48c1-bed0-48767d5790dd: PCI host bridge to bus 0826:00 Jan 17 00:04:50.510493 kernel: pci_bus 0826:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 00:04:50.512096 kernel: pci_bus 0826:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:04:50.526221 kernel: pci 0826:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 00:04:50.532110 kernel: pci 0826:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:04:50.539209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#205 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:04:50.539393 kernel: pci 0826:00:02.0: enabling Extended Tags Jan 17 00:04:50.558098 kernel: pci 0826:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0826:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 00:04:50.568880 kernel: pci_bus 0826:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:04:50.569052 kernel: pci 0826:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 00:04:50.608749 kernel: mlx5_core 0826:00:02.0: enabling device (0000 -> 0002) Jan 17 00:04:50.615055 kernel: mlx5_core 0826:00:02.0: firmware version: 16.30.5026 Jan 17 00:04:50.812065 kernel: hv_netvsc 7ced8d78-aef6-7ced-8d78-aef67ced8d78 eth0: VF registering: eth1 Jan 17 00:04:50.812271 kernel: mlx5_core 0826:00:02.0 eth1: joined to eth0 Jan 17 00:04:50.823094 kernel: mlx5_core 0826:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 00:04:50.833059 kernel: mlx5_core 0826:00:02.0 enP2086s1: renamed from eth1 Jan 17 00:04:50.986222 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:04:51.016052 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:04:51.046386 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (489) Jan 17 00:04:51.059811 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:04:51.065837 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:04:51.093180 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (494) Jan 17 00:04:51.094290 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:04:51.116664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:04:52.138109 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:04:52.138715 disk-uuid[608]: The operation has completed successfully. Jan 17 00:04:52.213678 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 17 00:04:52.215082 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:04:52.243195 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:04:52.253268 sh[667]: Success Jan 17 00:04:52.281207 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 00:04:52.562467 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:04:52.571182 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:04:52.577068 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:04:52.608606 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31 Jan 17 00:04:52.608647 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:04:52.614143 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:04:52.618127 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:04:52.621663 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:04:52.969741 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:04:52.974163 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:04:52.991209 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:04:52.997416 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:04:53.031072 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:04:53.031125 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:04:53.034863 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:04:53.073070 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:04:53.081547 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:04:53.092120 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:04:53.100085 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:04:53.114275 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:04:53.119589 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:04:53.133213 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:04:53.168073 systemd-networkd[851]: lo: Link UP Jan 17 00:04:53.169081 systemd-networkd[851]: lo: Gained carrier Jan 17 00:04:53.170681 systemd-networkd[851]: Enumeration completed Jan 17 00:04:53.170883 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:04:53.171660 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:04:53.171664 systemd-networkd[851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:04:53.179204 systemd[1]: Reached target network.target - Network. 
Jan 17 00:04:53.230063 kernel: mlx5_core 0826:00:02.0 enP2086s1: Link up Jan 17 00:04:53.265055 kernel: hv_netvsc 7ced8d78-aef6-7ced-8d78-aef67ced8d78 eth0: Data path switched to VF: enP2086s1 Jan 17 00:04:53.265611 systemd-networkd[851]: enP2086s1: Link UP Jan 17 00:04:53.265695 systemd-networkd[851]: eth0: Link UP Jan 17 00:04:53.265795 systemd-networkd[851]: eth0: Gained carrier Jan 17 00:04:53.265803 systemd-networkd[851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:04:53.285811 systemd-networkd[851]: enP2086s1: Gained carrier Jan 17 00:04:53.296081 systemd-networkd[851]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:04:54.325345 ignition[849]: Ignition 2.19.0 Jan 17 00:04:54.325356 ignition[849]: Stage: fetch-offline Jan 17 00:04:54.328752 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:04:54.325394 ignition[849]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:54.325402 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:54.325504 ignition[849]: parsed url from cmdline: "" Jan 17 00:04:54.325507 ignition[849]: no config URL provided Jan 17 00:04:54.325511 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:04:54.351403 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 17 00:04:54.325518 ignition[849]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:04:54.325523 ignition[849]: failed to fetch config: resource requires networking Jan 17 00:04:54.325687 ignition[849]: Ignition finished successfully Jan 17 00:04:54.372844 ignition[863]: Ignition 2.19.0 Jan 17 00:04:54.372851 ignition[863]: Stage: fetch Jan 17 00:04:54.373014 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:54.373023 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:54.373120 ignition[863]: parsed url from cmdline: "" Jan 17 00:04:54.373124 ignition[863]: no config URL provided Jan 17 00:04:54.373128 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:04:54.373135 ignition[863]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:04:54.373155 ignition[863]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:04:54.479210 ignition[863]: GET result: OK Jan 17 00:04:54.481962 ignition[863]: config has been read from IMDS userdata Jan 17 00:04:54.482007 ignition[863]: parsing config with SHA512: 3334ccb7f714b75e56ffacb24b2d7b09fd650ef22554e6895880b246bf99eb9053292402cf797ec8d7ab050fb14d724b2e898b5b2beff7ff797206288c770847 Jan 17 00:04:54.487262 unknown[863]: fetched base config from "system" Jan 17 00:04:54.487275 unknown[863]: fetched base config from "system" Jan 17 00:04:54.487801 ignition[863]: fetch: fetch complete Jan 17 00:04:54.487280 unknown[863]: fetched user config from "azure" Jan 17 00:04:54.487809 ignition[863]: fetch: fetch passed Jan 17 00:04:54.494027 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:04:54.487850 ignition[863]: Ignition finished successfully Jan 17 00:04:54.507250 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:04:54.524667 ignition[869]: Ignition 2.19.0 Jan 17 00:04:54.524679 ignition[869]: Stage: kargs Jan 17 00:04:54.528579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 17 00:04:54.524848 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:54.524859 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:54.545204 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:04:54.525945 ignition[869]: kargs: kargs passed Jan 17 00:04:54.525992 ignition[869]: Ignition finished successfully Jan 17 00:04:54.566176 ignition[876]: Ignition 2.19.0 Jan 17 00:04:54.566188 ignition[876]: Stage: disks Jan 17 00:04:54.570330 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:04:54.566356 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:54.566366 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:54.580887 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:04:54.567235 ignition[876]: disks: disks passed Jan 17 00:04:54.588963 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:04:54.567279 ignition[876]: Ignition finished successfully Jan 17 00:04:54.598751 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:04:54.607569 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:04:54.614794 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:04:54.636312 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 00:04:54.645402 systemd-networkd[851]: eth0: Gained IPv6LL Jan 17 00:04:54.709078 systemd-fsck[885]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 00:04:54.716840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:04:54.730261 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:04:54.784124 kernel: EXT4-fs (sda9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none. Jan 17 00:04:54.783997 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:04:54.788155 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:04:54.834120 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:04:54.852070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (896) Jan 17 00:04:54.862014 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:04:54.862041 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:04:54.866748 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:04:54.868225 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:04:54.880066 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:04:54.882218 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:04:54.892833 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:04:54.893147 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:04:54.909491 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:04:54.917021 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:04:54.931219 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 00:04:55.606114 coreos-metadata[913]: Jan 17 00:04:55.606 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:04:55.614628 coreos-metadata[913]: Jan 17 00:04:55.614 INFO Fetch successful Jan 17 00:04:55.618862 coreos-metadata[913]: Jan 17 00:04:55.617 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:04:55.629559 coreos-metadata[913]: Jan 17 00:04:55.629 INFO Fetch successful Jan 17 00:04:55.645944 coreos-metadata[913]: Jan 17 00:04:55.645 INFO wrote hostname ci-4081.3.6-n-4c16a83c6c to /sysroot/etc/hostname Jan 17 00:04:55.654099 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:04:55.741594 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:04:55.778871 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:04:55.804162 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:04:55.812730 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:04:56.732027 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:04:56.746283 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:04:56.756752 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:04:56.773521 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 00:04:56.778374 kernel: BTRFS info (device sda6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:04:56.796063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:04:56.808573 ignition[1016]: INFO : Ignition 2.19.0 Jan 17 00:04:56.813927 ignition[1016]: INFO : Stage: mount Jan 17 00:04:56.813927 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:56.813927 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:56.813927 ignition[1016]: INFO : mount: mount passed Jan 17 00:04:56.813927 ignition[1016]: INFO : Ignition finished successfully Jan 17 00:04:56.815129 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:04:56.835237 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:04:56.857185 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:04:56.879180 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1027) Jan 17 00:04:56.879231 kernel: BTRFS info (device sda6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700 Jan 17 00:04:56.889656 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 00:04:56.893159 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:04:56.900056 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:04:56.902437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 00:04:56.930743 ignition[1044]: INFO : Ignition 2.19.0 Jan 17 00:04:56.930743 ignition[1044]: INFO : Stage: files Jan 17 00:04:56.937803 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:56.937803 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:56.937803 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:04:56.937803 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:04:56.937803 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:04:56.976498 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:04:56.982914 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:04:56.982914 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:04:56.976898 unknown[1044]: wrote ssh authorized keys file for user: core Jan 17 00:04:56.999530 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:04:56.999530 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 17 00:05:12.034170 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:05:12.118192 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 17 00:05:12.563368 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:05:12.841731 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.841731 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:05:12.865328 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: files passed Jan 17 00:05:12.873907 ignition[1044]: INFO : Ignition finished successfully Jan 17 00:05:12.875506 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:05:12.905353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:05:12.920229 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:05:12.936394 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:05:12.936495 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:05:12.968444 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:12.965435 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:12.990736 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:12.990736 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:12.974036 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:05:13.005297 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:05:13.039410 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:05:13.039533 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 17 00:05:13.049273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:05:13.059543 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:05:13.068195 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:05:13.080326 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:05:13.097728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:13.114313 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:05:13.129320 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:13.134500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:13.144610 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:05:13.153501 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:05:13.153625 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:13.166615 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:05:13.171199 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:05:13.180079 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:05:13.189331 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:05:13.198075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:05:13.207492 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:05:13.217005 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:05:13.227223 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:05:13.236212 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:05:13.245623 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:05:13.253077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:05:13.253194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:05:13.264833 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:13.269529 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:13.278830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:05:13.278898 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:13.288786 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:05:13.288899 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:05:13.302993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:05:13.303123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:13.308912 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:05:13.309003 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:05:13.317363 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 17 00:05:13.382368 ignition[1096]: INFO : Ignition 2.19.0 Jan 17 00:05:13.382368 ignition[1096]: INFO : Stage: umount Jan 17 00:05:13.382368 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:13.382368 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:13.317451 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:05:13.421159 ignition[1096]: INFO : umount: umount passed Jan 17 00:05:13.421159 ignition[1096]: INFO : Ignition finished successfully Jan 17 00:05:13.349357 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:05:13.362177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:05:13.362353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:13.374503 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:05:13.392460 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:05:13.392648 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:13.401383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:05:13.401539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:05:13.422131 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:05:13.422771 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:05:13.422962 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:05:13.432525 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:05:13.432625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:05:13.442644 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:05:13.442730 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:05:13.446996 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:05:13.447036 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:05:13.455393 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:05:13.455428 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:05:13.463730 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:05:13.463764 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:05:13.472903 systemd[1]: Stopped target network.target - Network. Jan 17 00:05:13.480432 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:05:13.480476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:05:13.489547 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:05:13.498488 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:05:13.507194 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:13.512750 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:05:13.520197 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:05:13.528280 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:05:13.528324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:05:13.536513 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:05:13.536553 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 17 00:05:13.544994 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:05:13.545036 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:05:13.554221 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:05:13.554252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:05:13.562907 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:05:13.562943 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:05:13.571265 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:05:13.583433 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:05:13.587081 systemd-networkd[851]: eth0: DHCPv6 lease lost Jan 17 00:05:13.596267 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:05:13.596376 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:05:13.612937 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:05:13.615076 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:05:13.786238 kernel: hv_netvsc 7ced8d78-aef6-7ced-8d78-aef67ced8d78 eth0: Data path switched from VF: enP2086s1 Jan 17 00:05:13.624003 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:05:13.624062 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:05:13.657316 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:05:13.662589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:05:13.662658 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:05:13.672170 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:05:13.672218 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:13.679985 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:05:13.680028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:13.688760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:05:13.688798 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:05:13.697915 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:13.725793 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:05:13.725971 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:13.736464 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:05:13.736511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:13.745587 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:05:13.745623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:13.753324 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:05:13.753385 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:05:13.773153 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:05:13.773213 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:05:13.786291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 00:05:13.786346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:13.816295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:05:13.826101 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:05:13.826170 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:13.837617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:13.837677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:13.847804 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:05:13.847907 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:05:13.873148 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:05:13.873304 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:05:13.881777 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:05:13.910321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:05:13.927235 systemd[1]: Switching root. Jan 17 00:05:14.023634 systemd-journald[217]: Journal stopped
Jan 17 00:04:56.930743 ignition[1044]: INFO : Ignition 2.19.0 Jan 17 00:04:56.930743 ignition[1044]: INFO : Stage: files Jan 17 00:04:56.937803 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:04:56.937803 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:04:56.937803 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:04:56.937803 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:04:56.937803 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:04:56.976498 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:04:56.982914 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:04:56.982914 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:04:56.976898 unknown[1044]: wrote ssh authorized keys file for user: core Jan 17 00:04:56.999530 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:04:56.999530 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 17 00:05:12.034170 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 00:05:12.118192 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.126722 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 17 00:05:12.563368 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 00:05:12.841731 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 17 00:05:12.841731 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 00:05:12.865328 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:05:12.873907 ignition[1044]: INFO : files: files passed Jan 17 00:05:12.873907 ignition[1044]: INFO : Ignition finished successfully Jan 17 00:05:12.875506 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:05:12.905353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:05:12.920229 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:05:12.936394 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:05:12.936495 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:05:12.968444 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:12.965435 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:12.990736 initrd-setup-root-after-ignition[1072]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:12.990736 initrd-setup-root-after-ignition[1072]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:05:12.974036 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:05:13.005297 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:05:13.039410 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:05:13.039533 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 17 00:05:13.049273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:05:13.059543 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:05:13.068195 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:05:13.080326 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:05:13.097728 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:13.114313 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:05:13.129320 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:13.134500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:13.144610 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:05:13.153501 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 00:05:13.153625 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:05:13.166615 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:05:13.171199 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:05:13.180079 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:05:13.189331 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:05:13.198075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:05:13.207492 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:05:13.217005 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:05:13.227223 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:05:13.236212 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:05:13.245623 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:05:13.253077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:05:13.253194 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:05:13.264833 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:13.269529 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:13.278830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:05:13.278898 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:13.288786 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:05:13.288899 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:05:13.302993 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:05:13.303123 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:05:13.308912 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:05:13.309003 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:05:13.317363 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Jan 17 00:05:13.382368 ignition[1096]: INFO : Ignition 2.19.0 Jan 17 00:05:13.382368 ignition[1096]: INFO : Stage: umount Jan 17 00:05:13.382368 ignition[1096]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:05:13.382368 ignition[1096]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:05:13.317451 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:05:13.421159 ignition[1096]: INFO : umount: umount passed Jan 17 00:05:13.421159 ignition[1096]: INFO : Ignition finished successfully Jan 17 00:05:13.349357 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:05:13.362177 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:05:13.362353 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:13.374503 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:05:13.392460 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:05:13.392648 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:13.401383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 00:05:13.401539 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:05:13.422131 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:05:13.422771 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:05:13.422962 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:05:13.432525 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:05:13.432625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:05:13.442644 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 00:05:13.442730 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:05:13.446996 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:05:13.447036 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:05:13.455393 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:05:13.455428 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:05:13.463730 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:05:13.463764 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:05:13.472903 systemd[1]: Stopped target network.target - Network. Jan 17 00:05:13.480432 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:05:13.480476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:05:13.489547 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:05:13.498488 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:05:13.507194 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:13.512750 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:05:13.520197 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:05:13.528280 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:05:13.528324 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:05:13.536513 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:05:13.536553 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 17 00:05:13.544994 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:05:13.545036 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:05:13.554221 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:05:13.554252 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:05:13.562907 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:05:13.562943 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:05:13.571265 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:05:13.583433 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:05:13.587081 systemd-networkd[851]: eth0: DHCPv6 lease lost Jan 17 00:05:13.596267 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 00:05:13.596376 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:05:13.612937 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:05:13.615076 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:05:13.786238 kernel: hv_netvsc 7ced8d78-aef6-7ced-8d78-aef67ced8d78 eth0: Data path switched from VF: enP2086s1 Jan 17 00:05:13.624003 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:05:13.624062 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:05:13.657316 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:05:13.662589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:05:13.662658 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:05:13.672170 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:05:13.672218 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:13.679985 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:05:13.680028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:13.688760 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:05:13.688798 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:05:13.697915 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:13.725793 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:05:13.725971 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:13.736464 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:05:13.736511 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:13.745587 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:05:13.745623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:13.753324 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:05:13.753385 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:05:13.773153 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:05:13.773213 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:05:13.786291 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 00:05:13.786346 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:05:13.816295 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:05:13.826101 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:05:13.826170 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:13.837617 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:13.837677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:13.847804 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:05:13.847907 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:05:13.873148 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:05:13.873304 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:05:13.881777 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:05:13.910321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:05:13.927235 systemd[1]: Switching root. Jan 17 00:05:14.023634 systemd-journald[217]: Journal stopped Jan 17 00:05:23.565778 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 17 00:05:23.565803 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:05:23.565814 kernel: SELinux: policy capability open_perms=1 Jan 17 00:05:23.565824 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:05:23.565832 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:05:23.565839 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:05:23.565848 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:05:23.565856 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:05:23.565865 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:05:23.565874 systemd[1]: Successfully loaded SELinux policy in 170.628ms. Jan 17 00:05:23.565885 kernel: audit: type=1403 audit(1768608320.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:05:23.565894 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.150ms. Jan 17 00:05:23.565904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:05:23.565913 systemd[1]: Detected virtualization microsoft. Jan 17 00:05:23.565922 systemd[1]: Detected architecture arm64. Jan 17 00:05:23.565933 systemd[1]: Detected first boot. Jan 17 00:05:23.565943 systemd[1]: Hostname set to . Jan 17 00:05:23.565952 systemd[1]: Initializing machine ID from random generator. Jan 17 00:05:23.565961 zram_generator::config[1137]: No configuration found. Jan 17 00:05:23.565971 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:05:23.565980 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 00:05:23.565990 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 00:05:23.566000 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 00:05:23.566010 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Jan 17 00:05:23.566019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:05:23.566028 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:05:23.566038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:05:23.566064 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:05:23.566077 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:05:23.566087 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:05:23.566097 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:05:23.566107 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:05:23.566116 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:05:23.566125 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:05:23.566149 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:05:23.566158 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:05:23.566167 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:05:23.566178 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 00:05:23.566188 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:05:23.566197 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 00:05:23.566209 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 00:05:23.566218 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 00:05:23.566228 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:05:23.566238 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:05:23.566249 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:05:23.566259 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:05:23.566268 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:05:23.566278 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:05:23.566287 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:05:23.566297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:05:23.566306 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:05:23.566319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:05:23.566329 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:05:23.566339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:05:23.566348 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:05:23.566358 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:05:23.566367 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:05:23.566379 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:05:23.566388 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 17 00:05:23.566398 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:05:23.566408 systemd[1]: Reached target machines.target - Containers. Jan 17 00:05:23.566418 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:05:23.566427 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:05:23.566437 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:05:23.566447 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:05:23.566458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:05:23.566468 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:05:23.566479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:05:23.566489 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:05:23.566498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:05:23.566508 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:05:23.566518 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 00:05:23.566529 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 00:05:23.566538 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 00:05:23.566549 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 00:05:23.566559 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:05:23.566568 kernel: fuse: init (API version 7.39) Jan 17 00:05:23.566577 kernel: loop: module loaded Jan 17 00:05:23.566585 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:05:23.566595 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:05:23.566604 kernel: ACPI: bus type drm_connector registered Jan 17 00:05:23.566613 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:05:23.566638 systemd-journald[1226]: Collecting audit messages is disabled. Jan 17 00:05:23.566660 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:05:23.566670 systemd-journald[1226]: Journal started Jan 17 00:05:23.566691 systemd-journald[1226]: Runtime Journal (/run/log/journal/a61a97242db7449eb24071f09ac3175d) is 8.0M, max 78.5M, 70.5M free. Jan 17 00:05:22.683151 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:05:22.834858 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:05:22.835245 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 00:05:22.835602 systemd[1]: systemd-journald.service: Consumed 2.464s CPU time. Jan 17 00:05:23.582787 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 00:05:23.582844 systemd[1]: Stopped verity-setup.service. Jan 17 00:05:23.593386 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:05:23.594389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 17 00:05:23.599116 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:05:23.604269 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:05:23.608569 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:05:23.613348 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:05:23.619369 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:05:23.623713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:05:23.629233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:05:23.635254 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:05:23.635395 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:05:23.640867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:05:23.640990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:05:23.646290 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:05:23.646427 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:05:23.651574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:05:23.651701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:05:23.657525 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 00:05:23.657644 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:05:23.662561 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:05:23.662697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:05:23.667710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:05:23.672887 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:05:23.678565 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:05:23.684724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:05:23.701350 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:05:23.714147 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:05:23.719932 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:05:23.724910 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:05:23.724947 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:05:23.730632 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:05:23.737339 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:05:23.743568 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 00:05:23.748240 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:05:23.749861 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:05:23.756234 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 17 00:05:23.762166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:05:23.763335 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:05:23.770281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:05:23.772276 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:05:23.782626 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:05:23.791242 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:05:23.797230 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:05:23.806759 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:05:23.811759 systemd-journald[1226]: Time spent on flushing to /var/log/journal/a61a97242db7449eb24071f09ac3175d is 1.108850s for 889 entries. Jan 17 00:05:23.811759 systemd-journald[1226]: System Journal (/var/log/journal/a61a97242db7449eb24071f09ac3175d) is 11.8M, max 2.6G, 2.6G free. Jan 17 00:05:27.174368 systemd-journald[1226]: Received client request to flush runtime journal. Jan 17 00:05:27.174458 systemd-journald[1226]: /var/log/journal/a61a97242db7449eb24071f09ac3175d/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 17 00:05:27.174487 systemd-journald[1226]: Rotating system journal. Jan 17 00:05:27.174516 kernel: loop0: detected capacity change from 0 to 31320 Jan 17 00:05:27.174539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:05:27.174555 kernel: loop1: detected capacity change from 0 to 200800 Jan 17 00:05:27.174570 kernel: loop2: detected capacity change from 0 to 114328 Jan 17 00:05:23.820532 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:05:23.826445 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:05:23.832701 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:05:23.860901 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:05:23.878412 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:05:23.884397 udevadm[1274]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 00:05:23.896545 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:05:24.473134 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:05:24.484282 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:05:25.341505 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 17 00:05:25.341516 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jan 17 00:05:25.345427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:05:27.176306 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:05:27.297720 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 17 00:05:27.308337 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:05:27.333950 systemd-udevd[1295]: Using default interface naming scheme 'v255'. Jan 17 00:05:27.726964 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:05:27.730094 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:05:27.914598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:05:27.928426 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:05:27.974444 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 17 00:05:27.989557 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:05:28.088364 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:05:28.148070 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#248 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:05:28.154183 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:05:28.179038 kernel: hv_vmbus: registering driver hv_balloon Jan 17 00:05:28.179143 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 00:05:28.182578 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 17 00:05:28.191970 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 00:05:28.192066 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 00:05:28.199691 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 00:05:28.208210 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:05:28.213173 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:05:28.216428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:28.235348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:05:28.235529 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:28.247374 systemd-networkd[1306]: lo: Link UP Jan 17 00:05:28.247383 systemd-networkd[1306]: lo: Gained carrier Jan 17 00:05:28.248739 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:05:28.249755 systemd-networkd[1306]: Enumeration completed Jan 17 00:05:28.250063 systemd-networkd[1306]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:28.250066 systemd-networkd[1306]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:05:28.256469 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:05:28.270471 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 00:05:28.281143 kernel: loop3: detected capacity change from 0 to 114432 Jan 17 00:05:28.311067 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1318) Jan 17 00:05:28.324082 kernel: mlx5_core 0826:00:02.0 enP2086s1: Link up Jan 17 00:05:28.352265 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 17 00:05:28.353119 kernel: hv_netvsc 7ced8d78-aef6-7ced-8d78-aef67ced8d78 eth0: Data path switched to VF: enP2086s1 Jan 17 00:05:28.358768 systemd-networkd[1306]: enP2086s1: Link UP Jan 17 00:05:28.359526 systemd-networkd[1306]: eth0: Link UP Jan 17 00:05:28.359937 systemd-networkd[1306]: eth0: Gained carrier Jan 17 00:05:28.360296 systemd-networkd[1306]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:28.364211 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:05:28.365241 systemd-networkd[1306]: enP2086s1: Gained carrier Jan 17 00:05:28.378136 systemd-networkd[1306]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:05:28.413881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:05:28.634644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:05:28.649016 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:05:28.661684 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:05:28.678068 kernel: loop4: detected capacity change from 0 to 31320 Jan 17 00:05:28.693072 kernel: loop5: detected capacity change from 0 to 200800 Jan 17 00:05:28.710108 kernel: loop6: detected capacity change from 0 to 114328 Jan 17 00:05:28.724073 kernel: loop7: detected capacity change from 0 to 114432 Jan 17 00:05:28.736078 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:05:28.737095 (sd-merge)[1397]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 17 00:05:28.737515 (sd-merge)[1397]: Merged extensions into '/usr'. Jan 17 00:05:28.749591 systemd[1]: Reloading requested from client PID 1272 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:05:28.749607 systemd[1]: Reloading... Jan 17 00:05:28.819089 zram_generator::config[1429]: No configuration found. Jan 17 00:05:28.944754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:29.017944 systemd[1]: Reloading finished in 268 ms. Jan 17 00:05:29.047244 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:05:29.054171 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:05:29.064849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:05:29.075181 systemd[1]: Starting ensure-sysext.service... Jan 17 00:05:29.081205 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:05:29.088206 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:05:29.092424 lvm[1483]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:05:29.108753 systemd[1]: Reloading requested from client PID 1482 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:05:29.108780 systemd[1]: Reloading... Jan 17 00:05:29.142870 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 17 00:05:29.144788 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:05:29.145672 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:05:29.146550 systemd-tmpfiles[1484]: ACLs are not supported, ignoring. Jan 17 00:05:29.146606 systemd-tmpfiles[1484]: ACLs are not supported, ignoring. Jan 17 00:05:29.151891 systemd-tmpfiles[1484]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:05:29.153193 systemd-tmpfiles[1484]: Skipping /boot Jan 17 00:05:29.165862 systemd-tmpfiles[1484]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:05:29.166000 systemd-tmpfiles[1484]: Skipping /boot Jan 17 00:05:29.205188 zram_generator::config[1514]: No configuration found. Jan 17 00:05:29.304958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:29.382483 systemd[1]: Reloading finished in 273 ms. Jan 17 00:05:29.401716 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:05:29.412450 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:05:29.431274 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:05:29.438619 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:05:29.446338 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:05:29.462302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:05:29.469078 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:05:29.477816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:05:29.486566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:05:29.497727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:05:29.515450 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:05:29.521429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:05:29.522324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:05:29.523092 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:05:29.529068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:05:29.531104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:05:29.537625 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:05:29.539077 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:05:29.554952 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:05:29.570916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:05:29.580360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 17 00:05:29.587407 systemd-resolved[1582]: Positive Trust Anchors: Jan 17 00:05:29.587718 systemd-resolved[1582]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:05:29.587997 systemd-resolved[1582]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:05:29.590396 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:05:29.597484 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:05:29.606409 augenrules[1601]: No rules Jan 17 00:05:29.610202 systemd-resolved[1582]: Using system hostname 'ci-4081.3.6-n-4c16a83c6c'. Jan 17 00:05:29.615648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:05:29.620475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:05:29.620653 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:05:29.626427 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:05:29.631920 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:05:29.637639 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:05:29.643779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:05:29.643928 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:05:29.649809 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:05:29.649945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:05:29.655140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:05:29.655271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:05:29.661350 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:05:29.661483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:05:29.672458 systemd[1]: Reached target network.target - Network. Jan 17 00:05:29.676659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:05:29.682268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:05:29.682333 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:05:29.682677 systemd[1]: Finished ensure-sysext.service. Jan 17 00:05:30.001160 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:05:30.007666 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 17 00:05:30.035186 systemd-networkd[1306]: eth0: Gained IPv6LL Jan 17 00:05:30.037765 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:05:30.044074 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:05:32.642777 ldconfig[1266]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:05:32.657539 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:05:32.667262 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:05:32.681299 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:05:32.686550 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:05:32.691223 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:05:32.696707 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:05:32.702619 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:05:32.707704 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:05:32.713228 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:05:32.718556 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:05:32.718585 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:05:32.722543 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:05:32.729080 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:05:32.736557 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:05:32.746583 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:05:32.752105 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:05:32.757631 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:05:32.761882 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:05:32.766068 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:05:32.766174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:05:32.772133 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 00:05:32.777150 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:05:32.783200 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:05:32.796241 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:05:32.804482 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:05:32.810396 (chronyd)[1623]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 00:05:32.814254 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:05:32.818990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 17 00:05:32.819032 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 00:05:32.820197 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 00:05:32.825256 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:05:32.830446 chronyd[1635]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:05:32.832130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:32.832969 KVP[1631]: KVP starting; pid is:1631 Jan 17 00:05:32.835355 jq[1628]: false Jan 17 00:05:32.839181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:05:32.849458 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:05:32.856395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:05:32.865351 chronyd[1635]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:05:32.865581 chronyd[1635]: Loaded seccomp filter (level 2) Jan 17 00:05:32.867285 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:05:32.873297 extend-filesystems[1630]: Found loop4 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found loop5 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found loop6 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found loop7 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda1 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda2 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda3 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found usr Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda4 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda6 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda7 Jan 17 00:05:32.879433 extend-filesystems[1630]: Found sda9 Jan 17 00:05:32.879433 extend-filesystems[1630]: Checking size of /dev/sda9 Jan 17 00:05:33.039408 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.001 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.004 INFO Fetch successful Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.004 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.008 INFO Fetch successful Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.008 INFO Fetching http://168.63.129.16/machine/abdd79b6-e06f-4a24-b3b2-93ec7976a5aa/2ec5cf58%2D7297%2D4018%2D8e18%2Dc0dfac80b920.%5Fci%2D4081.3.6%2Dn%2D4c16a83c6c?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.011 INFO Fetch successful Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.011 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:05:33.039468 coreos-metadata[1625]: Jan 17 00:05:33.022 INFO Fetch successful Jan 17 00:05:32.923447 dbus-daemon[1626]: [system] SELinux support is enabled Jan 17 00:05:32.882320 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 17 00:05:33.058482 extend-filesystems[1630]: Old size kept for /dev/sda9 Jan 17 00:05:33.058482 extend-filesystems[1630]: Found sr0 Jan 17 00:05:32.947660 KVP[1631]: KVP LIC Version: 3.1 Jan 17 00:05:32.905648 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:05:33.095364 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:05:32.918581 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:05:32.919091 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:05:33.109005 update_engine[1656]: I20260117 00:05:33.055738 1656 main.cc:92] Flatcar Update Engine starting Jan 17 00:05:33.109005 update_engine[1656]: I20260117 00:05:33.069402 1656 update_check_scheduler.cc:74] Next update check in 9m36s Jan 17 00:05:32.921218 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:05:33.109368 jq[1661]: true Jan 17 00:05:32.944782 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:05:32.965781 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:05:32.977410 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 00:05:32.985394 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:05:32.985573 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:05:32.985817 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:05:33.109984 jq[1671]: true Jan 17 00:05:32.988955 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:05:33.009470 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:05:33.009646 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:05:33.021352 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:05:33.044383 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:05:33.044555 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:05:33.054445 systemd-logind[1652]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:05:33.054632 systemd-logind[1652]: New seat seat0. Jan 17 00:05:33.062075 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:05:33.074996 (ntainerd)[1672]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:05:33.094019 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:05:33.094082 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:05:33.110386 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:05:33.110408 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:05:33.132921 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
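For reference, the coreos-metadata fetches recorded above target the Azure wireserver (168.63.129.16) and the instance metadata service (169.254.169.254). The following minimal Python sketch is an illustration only, not the agent's own code; the one assumption beyond the log is that the IMDS endpoint requires a `Metadata: true` request header, which the wireserver versions endpoint does not.

```python
# Sketch (not coreos-metadata's implementation): query the two Azure endpoints
# seen in the log above. IMDS (169.254.169.254) requires the "Metadata: true"
# header; the wireserver versions endpoint (168.63.129.16) does not.
import urllib.request

def fetch(url: str, headers: dict | None = None) -> str:
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Wireserver protocol versions (same URL as the first fetch in the log).
    print(fetch("http://168.63.129.16/?comp=versions"))
    # Instance size from IMDS (same URL as the last fetch in the log).
    print(fetch(
        "http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text",
        headers={"Metadata": "true"},
    ))
```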
Jan 17 00:05:33.149413 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:05:33.154759 tar[1667]: linux-arm64/LICENSE Jan 17 00:05:33.156836 tar[1667]: linux-arm64/helm Jan 17 00:05:33.160225 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:05:33.167295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 00:05:33.228086 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1675) Jan 17 00:05:33.309107 bash[1715]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:05:33.312488 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:05:33.320467 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:05:33.514128 locksmithd[1703]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:05:33.831010 containerd[1672]: time="2026-01-17T00:05:33.830915720Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:05:33.915235 containerd[1672]: time="2026-01-17T00:05:33.915189200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:05:33.918250 containerd[1672]: time="2026-01-17T00:05:33.918202280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:05:33.918790 containerd[1672]: time="2026-01-17T00:05:33.918763800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:05:33.918883 containerd[1672]: time="2026-01-17T00:05:33.918868800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:05:33.919136 containerd[1672]: time="2026-01-17T00:05:33.919115280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:05:33.921303 containerd[1672]: time="2026-01-17T00:05:33.921147040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:05:33.921303 containerd[1672]: time="2026-01-17T00:05:33.921249720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:05:33.921303 containerd[1672]: time="2026-01-17T00:05:33.921265320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:05:33.922434 containerd[1672]: time="2026-01-17T00:05:33.922273040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:05:33.922434 containerd[1672]: time="2026-01-17T00:05:33.922298400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 17 00:05:33.922434 containerd[1672]: time="2026-01-17T00:05:33.922322160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:05:33.922434 containerd[1672]: time="2026-01-17T00:05:33.922333680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:05:33.923109 containerd[1672]: time="2026-01-17T00:05:33.922657760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:05:33.924600 containerd[1672]: time="2026-01-17T00:05:33.923675280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:05:33.926434 containerd[1672]: time="2026-01-17T00:05:33.926167280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:05:33.927072 containerd[1672]: time="2026-01-17T00:05:33.926516920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:05:33.931395 containerd[1672]: time="2026-01-17T00:05:33.926645800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:05:33.931395 containerd[1672]: time="2026-01-17T00:05:33.930521880Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:05:33.942957 containerd[1672]: time="2026-01-17T00:05:33.942921320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:05:33.943101 containerd[1672]: time="2026-01-17T00:05:33.943086920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:05:33.943398 containerd[1672]: time="2026-01-17T00:05:33.943377320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.943462720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.943484520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.943645360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.943899720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.943999240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.944015120Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:05:33.944343 containerd[1672]: time="2026-01-17T00:05:33.944029360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 17 00:05:33.944545 containerd[1672]: time="2026-01-17T00:05:33.944520960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.944628 containerd[1672]: time="2026-01-17T00:05:33.944615040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.944690 containerd[1672]: time="2026-01-17T00:05:33.944668240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.944738600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945481040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945508560Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945522440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945548240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945571600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945585760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.945613 containerd[1672]: time="2026-01-17T00:05:33.945597920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.946143 containerd[1672]: time="2026-01-17T00:05:33.946079760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.946143 containerd[1672]: time="2026-01-17T00:05:33.946110840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.946143 containerd[1672]: time="2026-01-17T00:05:33.946124800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.946256 containerd[1672]: time="2026-01-17T00:05:33.946242120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.946325 containerd[1672]: time="2026-01-17T00:05:33.946313120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.947334 containerd[1672]: time="2026-01-17T00:05:33.947200040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.947334 containerd[1672]: time="2026-01-17T00:05:33.947226280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.947334 containerd[1672]: time="2026-01-17T00:05:33.947243240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 17 00:05:33.947334 containerd[1672]: time="2026-01-17T00:05:33.947264080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.947334 containerd[1672]: time="2026-01-17T00:05:33.947294200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.947334 containerd[1672]: time="2026-01-17T00:05:33.947312600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:05:33.949601 containerd[1672]: time="2026-01-17T00:05:33.949578760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.949705 containerd[1672]: time="2026-01-17T00:05:33.949691240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.949766 containerd[1672]: time="2026-01-17T00:05:33.949744320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:05:33.950307 containerd[1672]: time="2026-01-17T00:05:33.950288360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950422040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950441840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950463840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950475360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950490160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950499880Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:05:33.951434 containerd[1672]: time="2026-01-17T00:05:33.950511360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 00:05:33.952292 containerd[1672]: time="2026-01-17T00:05:33.951978200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:05:33.952292 containerd[1672]: time="2026-01-17T00:05:33.952061080Z" level=info msg="Connect containerd service" Jan 17 00:05:33.952292 containerd[1672]: time="2026-01-17T00:05:33.952102600Z" level=info msg="using legacy CRI server" Jan 17 00:05:33.952292 containerd[1672]: time="2026-01-17T00:05:33.952110440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:05:33.952292 containerd[1672]: time="2026-01-17T00:05:33.952231560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:05:33.957443 containerd[1672]: time="2026-01-17T00:05:33.957378320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:05:33.958184 
containerd[1672]: time="2026-01-17T00:05:33.957526600Z" level=info msg="Start subscribing containerd event" Jan 17 00:05:33.958184 containerd[1672]: time="2026-01-17T00:05:33.957587320Z" level=info msg="Start recovering state" Jan 17 00:05:33.958184 containerd[1672]: time="2026-01-17T00:05:33.957659600Z" level=info msg="Start event monitor" Jan 17 00:05:33.958184 containerd[1672]: time="2026-01-17T00:05:33.957678960Z" level=info msg="Start snapshots syncer" Jan 17 00:05:33.958184 containerd[1672]: time="2026-01-17T00:05:33.957689160Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:05:33.958184 containerd[1672]: time="2026-01-17T00:05:33.957696720Z" level=info msg="Start streaming server" Jan 17 00:05:33.968712 containerd[1672]: time="2026-01-17T00:05:33.958539680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:05:33.968712 containerd[1672]: time="2026-01-17T00:05:33.958596440Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:05:33.968712 containerd[1672]: time="2026-01-17T00:05:33.958654680Z" level=info msg="containerd successfully booted in 0.132011s" Jan 17 00:05:33.958805 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:05:34.005395 tar[1667]: linux-arm64/README.md Jan 17 00:05:34.022780 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:05:34.081215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:34.091446 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:05:34.450127 kubelet[1760]: E0117 00:05:34.449509 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:05:34.452257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:05:34.452510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:05:34.557946 sshd_keygen[1653]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:05:34.578120 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:05:34.590265 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:05:34.597277 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 00:05:34.602618 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:05:34.602788 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:05:34.609829 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:05:34.624839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:05:34.636171 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 00:05:34.643518 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:05:34.649277 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 17 00:05:34.654353 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:05:34.658387 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:05:34.664772 systemd[1]: Startup finished in 622ms (kernel) + 31.196s (initrd) + 14.825s (userspace) = 46.644s. 
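The kubelet failure above (and its repeats later in this log) is caused by the missing /var/lib/kubelet/config.yaml, which kubeadm normally generates when the node is initialized or joined. Purely as an illustration, and not this cluster's actual configuration, a Python sketch that writes a minimal placeholder KubeletConfiguration to that path:

```python
# Context, not a fix applied on this node: kubelet keeps restarting because
# /var/lib/kubelet/config.yaml does not exist yet; kubeadm generates it during
# init/join. A minimal hand-written stand-in (assumption, not this cluster's
# real config) would look like this.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
"""

def write_placeholder(path: str = "/var/lib/kubelet/config.yaml") -> None:
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)  # requires root for this path
    target.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_placeholder()
```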
Jan 17 00:05:34.990591 login[1789]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 00:05:34.992760 login[1790]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:35.006095 systemd-logind[1652]: New session 2 of user core. Jan 17 00:05:35.008083 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:05:35.015281 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:05:35.043517 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:05:35.051541 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:05:35.055021 (systemd)[1797]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:05:35.182235 systemd[1797]: Queued start job for default target default.target. Jan 17 00:05:35.188516 systemd[1797]: Created slice app.slice - User Application Slice. Jan 17 00:05:35.188548 systemd[1797]: Reached target paths.target - Paths. Jan 17 00:05:35.188560 systemd[1797]: Reached target timers.target - Timers. Jan 17 00:05:35.190030 systemd[1797]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:05:35.201098 systemd[1797]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:05:35.201218 systemd[1797]: Reached target sockets.target - Sockets. Jan 17 00:05:35.201231 systemd[1797]: Reached target basic.target - Basic System. Jan 17 00:05:35.201468 systemd[1797]: Reached target default.target - Main User Target. Jan 17 00:05:35.201505 systemd[1797]: Startup finished in 140ms. Jan 17 00:05:35.201517 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:05:35.207207 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:05:35.990964 login[1789]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:05:35.995210 systemd-logind[1652]: New session 1 of user core. Jan 17 00:05:36.004218 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 17 00:05:36.205840 waagent[1787]: 2026-01-17T00:05:36.205738Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:05:36.210889 waagent[1787]: 2026-01-17T00:05:36.210825Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:05:36.214890 waagent[1787]: 2026-01-17T00:05:36.214840Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:05:36.218996 waagent[1787]: 2026-01-17T00:05:36.218772Z INFO Daemon Daemon Run daemon Jan 17 00:05:36.222564 waagent[1787]: 2026-01-17T00:05:36.222512Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:05:36.230163 waagent[1787]: 2026-01-17T00:05:36.230110Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:05:36.234677 waagent[1787]: 2026-01-17T00:05:36.234633Z INFO Daemon Daemon Activate resource disk Jan 17 00:05:36.238754 waagent[1787]: 2026-01-17T00:05:36.238710Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:05:36.248884 waagent[1787]: 2026-01-17T00:05:36.248801Z INFO Daemon Daemon Found device: None Jan 17 00:05:36.252768 waagent[1787]: 2026-01-17T00:05:36.252722Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:05:36.260224 waagent[1787]: 2026-01-17T00:05:36.260171Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:05:36.271437 waagent[1787]: 2026-01-17T00:05:36.271381Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:05:36.276478 waagent[1787]: 2026-01-17T00:05:36.276434Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:05:36.287989 waagent[1787]: 2026-01-17T00:05:36.287924Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 00:05:36.298588 waagent[1787]: 2026-01-17T00:05:36.298530Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:05:36.306298 waagent[1787]: 2026-01-17T00:05:36.306246Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:05:36.310140 waagent[1787]: 2026-01-17T00:05:36.310099Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:05:36.391373 waagent[1787]: 2026-01-17T00:05:36.390709Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:05:36.421571 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 00:05:36.424076 waagent[1787]: 2026-01-17T00:05:36.423281Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:05:36.427388 waagent[1787]: 2026-01-17T00:05:36.427335Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:05:36.432482 waagent[1787]: 2026-01-17T00:05:36.432431Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 00:05:36.437933 waagent[1787]: 2026-01-17T00:05:36.437893Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:05:36.442562 waagent[1787]: 2026-01-17T00:05:36.442515Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:05:36.446870 waagent[1787]: 2026-01-17T00:05:36.446831Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:05:36.486312 waagent[1787]: 2026-01-17T00:05:36.486262Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:05:36.492290 waagent[1787]: 2026-01-17T00:05:36.492247Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:05:36.496528 waagent[1787]: 2026-01-17T00:05:36.496488Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:05:36.810287 waagent[1787]: 2026-01-17T00:05:36.810198Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:05:36.818066 waagent[1787]: 2026-01-17T00:05:36.815971Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:05:36.824243 waagent[1787]: 2026-01-17T00:05:36.824195Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:05:36.842263 waagent[1787]: 2026-01-17T00:05:36.842220Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:05:36.847336 waagent[1787]: 2026-01-17T00:05:36.847282Z INFO Daemon Jan 17 00:05:36.849749 waagent[1787]: 2026-01-17T00:05:36.849707Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ebaad39b-0975-4847-a77b-51edd8cceecb eTag: 16401410940189715685 source: Fabric] Jan 17 00:05:36.859899 waagent[1787]: 2026-01-17T00:05:36.859847Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 00:05:36.866276 waagent[1787]: 2026-01-17T00:05:36.866223Z INFO Daemon Jan 17 00:05:36.868744 waagent[1787]: 2026-01-17T00:05:36.868691Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:05:36.878809 waagent[1787]: 2026-01-17T00:05:36.878769Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:05:36.949600 waagent[1787]: 2026-01-17T00:05:36.949509Z INFO Daemon Downloaded certificate {'thumbprint': 'B821667CB418628ADC68614E85647EE9CA4B457A', 'hasPrivateKey': True} Jan 17 00:05:36.957751 waagent[1787]: 2026-01-17T00:05:36.957697Z INFO Daemon Fetch goal state completed Jan 17 00:05:36.968084 waagent[1787]: 2026-01-17T00:05:36.968016Z INFO Daemon Daemon Starting provisioning Jan 17 00:05:36.972179 waagent[1787]: 2026-01-17T00:05:36.972131Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:05:36.975961 waagent[1787]: 2026-01-17T00:05:36.975923Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-4c16a83c6c] Jan 17 00:05:37.002079 waagent[1787]: 2026-01-17T00:05:37.001763Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-4c16a83c6c] Jan 17 00:05:37.007176 waagent[1787]: 2026-01-17T00:05:37.007113Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:05:37.012402 waagent[1787]: 2026-01-17T00:05:37.012350Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:05:37.045567 systemd-networkd[1306]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:05:37.045574 systemd-networkd[1306]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 17 00:05:37.045622 systemd-networkd[1306]: eth0: DHCP lease lost Jan 17 00:05:37.047080 waagent[1787]: 2026-01-17T00:05:37.046914Z INFO Daemon Daemon Create user account if not exists Jan 17 00:05:37.051499 waagent[1787]: 2026-01-17T00:05:37.051447Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:05:37.055958 waagent[1787]: 2026-01-17T00:05:37.055915Z INFO Daemon Daemon Configure sudoer Jan 17 00:05:37.059891 waagent[1787]: 2026-01-17T00:05:37.059835Z INFO Daemon Daemon Configure sshd Jan 17 00:05:37.063993 waagent[1787]: 2026-01-17T00:05:37.063911Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:05:37.074326 waagent[1787]: 2026-01-17T00:05:37.074274Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:05:37.080205 systemd-networkd[1306]: eth0: DHCPv6 lease lost Jan 17 00:05:37.097094 systemd-networkd[1306]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 00:05:38.185576 waagent[1787]: 2026-01-17T00:05:38.181795Z INFO Daemon Daemon Provisioning complete Jan 17 00:05:38.198241 waagent[1787]: 2026-01-17T00:05:38.198189Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:05:38.203071 waagent[1787]: 2026-01-17T00:05:38.203011Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 00:05:38.210479 waagent[1787]: 2026-01-17T00:05:38.210436Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 00:05:38.345719 waagent[1847]: 2026-01-17T00:05:38.345011Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 00:05:38.345719 waagent[1847]: 2026-01-17T00:05:38.345204Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 17 00:05:38.345719 waagent[1847]: 2026-01-17T00:05:38.345261Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 00:05:39.052077 waagent[1847]: 2026-01-17T00:05:39.051949Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 00:05:39.054075 waagent[1847]: 2026-01-17T00:05:39.052412Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:05:39.054075 waagent[1847]: 2026-01-17T00:05:39.052497Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:05:39.060620 waagent[1847]: 2026-01-17T00:05:39.060544Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:05:39.068342 waagent[1847]: 2026-01-17T00:05:39.068293Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 17 00:05:39.069007 waagent[1847]: 2026-01-17T00:05:39.068963Z INFO ExtHandler Jan 17 00:05:39.069200 waagent[1847]: 2026-01-17T00:05:39.069161Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4788ced3-8f20-48ad-95e0-ef442ba85ac3 eTag: 16401410940189715685 source: Fabric] Jan 17 00:05:39.069590 waagent[1847]: 2026-01-17T00:05:39.069547Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:05:39.082186 waagent[1847]: 2026-01-17T00:05:39.082093Z INFO ExtHandler Jan 17 00:05:39.082419 waagent[1847]: 2026-01-17T00:05:39.082380Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:05:39.086701 waagent[1847]: 2026-01-17T00:05:39.086663Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:05:39.184344 waagent[1847]: 2026-01-17T00:05:39.184229Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B821667CB418628ADC68614E85647EE9CA4B457A', 'hasPrivateKey': True} Jan 17 00:05:39.185120 waagent[1847]: 2026-01-17T00:05:39.185070Z INFO ExtHandler Fetch goal state completed Jan 17 00:05:39.203578 waagent[1847]: 2026-01-17T00:05:39.203518Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1847 Jan 17 00:05:39.203861 waagent[1847]: 2026-01-17T00:05:39.203820Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 00:05:39.205610 waagent[1847]: 2026-01-17T00:05:39.205563Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 00:05:39.206121 waagent[1847]: 2026-01-17T00:05:39.206020Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 00:05:39.237111 waagent[1847]: 2026-01-17T00:05:39.237065Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 00:05:39.237462 waagent[1847]: 2026-01-17T00:05:39.237419Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 00:05:39.243422 waagent[1847]: 2026-01-17T00:05:39.243382Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 00:05:39.249942 systemd[1]: Reloading requested from client PID 1862 ('systemctl') (unit waagent.service)... Jan 17 00:05:39.250197 systemd[1]: Reloading... Jan 17 00:05:39.330074 zram_generator::config[1899]: No configuration found. Jan 17 00:05:39.437082 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:39.512559 systemd[1]: Reloading finished in 262 ms. Jan 17 00:05:39.536069 waagent[1847]: 2026-01-17T00:05:39.533274Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 00:05:39.540142 systemd[1]: Reloading requested from client PID 1950 ('systemctl') (unit waagent.service)... Jan 17 00:05:39.540154 systemd[1]: Reloading... Jan 17 00:05:39.622127 zram_generator::config[1985]: No configuration found. Jan 17 00:05:39.725155 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:05:39.800607 systemd[1]: Reloading finished in 260 ms. Jan 17 00:05:39.827065 waagent[1847]: 2026-01-17T00:05:39.824328Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 00:05:39.827065 waagent[1847]: 2026-01-17T00:05:39.824491Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 00:05:40.181580 waagent[1847]: 2026-01-17T00:05:40.181493Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 17 00:05:40.182212 waagent[1847]: 2026-01-17T00:05:40.182161Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 00:05:40.183082 waagent[1847]: 2026-01-17T00:05:40.183006Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 00:05:40.183247 waagent[1847]: 2026-01-17T00:05:40.183158Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:05:40.183523 waagent[1847]: 2026-01-17T00:05:40.183291Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:05:40.183724 waagent[1847]: 2026-01-17T00:05:40.183668Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 00:05:40.184087 waagent[1847]: 2026-01-17T00:05:40.184017Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 00:05:40.184392 waagent[1847]: 2026-01-17T00:05:40.184345Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:05:40.184454 waagent[1847]: 2026-01-17T00:05:40.184426Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:05:40.184583 waagent[1847]: 2026-01-17T00:05:40.184551Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:05:40.184639 waagent[1847]: 2026-01-17T00:05:40.184613Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:05:40.184685 waagent[1847]: 2026-01-17T00:05:40.184661Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:05:40.186078 waagent[1847]: 2026-01-17T00:05:40.185275Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:05:40.186078 waagent[1847]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:05:40.186078 waagent[1847]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:05:40.186078 waagent[1847]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:05:40.186078 waagent[1847]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:05:40.186078 waagent[1847]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:05:40.186078 waagent[1847]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:05:40.186078 waagent[1847]: 2026-01-17T00:05:40.185445Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:05:40.186518 waagent[1847]: 2026-01-17T00:05:40.186309Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:05:40.187518 waagent[1847]: 2026-01-17T00:05:40.187365Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:05:40.187518 waagent[1847]: 2026-01-17T00:05:40.187442Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
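The MonitorHandler routing-table dump above comes from /proc/net/route, where destinations and gateways are 32-bit values hex-encoded in host (little-endian) byte order. A short Python sketch, assuming nothing beyond that encoding, decodes the entries and shows they are the default route via 10.200.20.1, the 10.200.20.0/24 on-link route, and host routes to the wireserver (168.63.129.16) and IMDS (169.254.169.254):

```python
# Sketch (not waagent code): decode the hex addresses from the /proc/net/route
# dump above. Values are 32-bit integers in host byte order (little-endian on
# this machine), so "0114C80A" -> 10.200.20.1 and "10813FA8" -> 168.63.129.16.
import socket
import struct

def decode(hexaddr: str) -> str:
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

for dest, gw in [("00000000", "0114C80A"),   # default route via 10.200.20.1
                 ("0014C80A", "00000000"),   # 10.200.20.0/24 on-link
                 ("10813FA8", "0114C80A"),   # 168.63.129.16 (wireserver)
                 ("FEA9FEA9", "0114C80A")]:  # 169.254.169.254 (IMDS)
    print(decode(dest), "via", decode(gw))
```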
Jan 17 00:05:40.187637 waagent[1847]: 2026-01-17T00:05:40.187595Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:05:40.196788 waagent[1847]: 2026-01-17T00:05:40.196739Z INFO ExtHandler ExtHandler Jan 17 00:05:40.198093 waagent[1847]: 2026-01-17T00:05:40.196978Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 7ff8b5bd-6f4d-4184-8cc1-aa01e746afd3 correlation 15bb6efc-89bc-45e3-b101-844a2aa6df9d created: 2026-01-17T00:04:19.096718Z] Jan 17 00:05:40.198230 waagent[1847]: 2026-01-17T00:05:40.198173Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:05:40.199006 waagent[1847]: 2026-01-17T00:05:40.198958Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 17 00:05:40.228744 waagent[1847]: 2026-01-17T00:05:40.228285Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:05:40.228744 waagent[1847]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:05:40.228744 waagent[1847]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:05:40.228744 waagent[1847]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:78:ae:f6 brd ff:ff:ff:ff:ff:ff Jan 17 00:05:40.228744 waagent[1847]: 3: enP2086s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:78:ae:f6 brd ff:ff:ff:ff:ff:ff\ altname enP2086p0s2 Jan 17 00:05:40.228744 waagent[1847]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:05:40.228744 waagent[1847]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:05:40.228744 waagent[1847]: 2: eth0 inet 10.200.20.43/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:05:40.228744 waagent[1847]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:05:40.228744 waagent[1847]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:05:40.228744 waagent[1847]: 2: eth0 inet6 fe80::7eed:8dff:fe78:aef6/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:05:40.247312 waagent[1847]: 2026-01-17T00:05:40.247250Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 1D9CF795-A079-488C-8E34-7D6E3471557F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:05:40.325946 waagent[1847]: 2026-01-17T00:05:40.325861Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 00:05:40.325946 waagent[1847]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:05:40.325946 waagent[1847]: pkts bytes target prot opt in out source destination Jan 17 00:05:40.325946 waagent[1847]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:05:40.325946 waagent[1847]: pkts bytes target prot opt in out source destination Jan 17 00:05:40.325946 waagent[1847]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:05:40.325946 waagent[1847]: pkts bytes target prot opt in out source destination Jan 17 00:05:40.325946 waagent[1847]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:05:40.325946 waagent[1847]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:05:40.325946 waagent[1847]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:05:40.328854 waagent[1847]: 2026-01-17T00:05:40.328797Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:05:40.328854 waagent[1847]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:05:40.328854 waagent[1847]: pkts bytes target prot opt in out source destination Jan 17 00:05:40.328854 waagent[1847]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:05:40.328854 waagent[1847]: pkts bytes target prot opt in out source destination Jan 17 00:05:40.328854 waagent[1847]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:05:40.328854 waagent[1847]: pkts bytes target prot opt in out source destination Jan 17 00:05:40.328854 waagent[1847]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:05:40.328854 waagent[1847]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:05:40.328854 waagent[1847]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:05:40.329164 waagent[1847]: 2026-01-17T00:05:40.329072Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:05:44.489352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:05:44.499238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:44.608679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:05:44.615570 (kubelet)[2077]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:05:44.651492 kubelet[2077]: E0117 00:05:44.651443 2077 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:05:44.655087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:05:44.655233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:05:54.739427 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:05:54.750264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:05:54.868635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
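The OUTPUT-chain rules dumped above allow TCP port 53 and root-owned (UID 0) TCP traffic to the wireserver and drop other new connections to it. The sketch below is a hypothetical reconstruction, not taken from waagent: it only prints iptables commands that would produce equivalent rules.

```python
# Hypothetical reconstruction (not waagent's own code): iptables invocations
# that would yield OUTPUT-chain rules like those dumped above -- allow DNS
# (tcp/53) and root-owned (UID 0) TCP traffic to the wireserver, drop other
# new connections to it.
WIRESERVER = "168.63.129.16"

rules = [
    f"iptables -w -A OUTPUT -d {WIRESERVER} -p tcp --dport 53 -j ACCEPT",
    f"iptables -w -A OUTPUT -d {WIRESERVER} -p tcp -m owner --uid-owner 0 -j ACCEPT",
    f"iptables -w -A OUTPUT -d {WIRESERVER} -p tcp -m conntrack --ctstate INVALID,NEW -j DROP",
]

for rule in rules:
    print(rule)  # run these as root (e.g. via subprocess) to actually apply them
```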
Jan 17 00:05:54.873372 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:05:54.942423 kubelet[2092]: E0117 00:05:54.942364 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:05:54.945168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:05:54.945420 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:05:56.650191 chronyd[1635]: Selected source PHC0 Jan 17 00:06:04.989445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:06:04.999227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:05.097081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:05.101104 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:05.132541 kubelet[2107]: E0117 00:06:05.132455 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:05.135288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:05.135547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:06:05.346608 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:06:05.348183 systemd[1]: Started sshd@0-10.200.20.43:22-10.200.16.10:33450.service - OpenSSH per-connection server daemon (10.200.16.10:33450). Jan 17 00:06:06.063699 sshd[2115]: Accepted publickey for core from 10.200.16.10 port 33450 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:06.065124 sshd[2115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:06.069112 systemd-logind[1652]: New session 3 of user core. Jan 17 00:06:06.076213 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:06:06.469954 systemd[1]: Started sshd@1-10.200.20.43:22-10.200.16.10:33464.service - OpenSSH per-connection server daemon (10.200.16.10:33464). Jan 17 00:06:06.921890 sshd[2120]: Accepted publickey for core from 10.200.16.10 port 33464 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:06.923348 sshd[2120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:06.928131 systemd-logind[1652]: New session 4 of user core. Jan 17 00:06:06.931212 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:06:07.253982 sshd[2120]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:07.257264 systemd[1]: sshd@1-10.200.20.43:22-10.200.16.10:33464.service: Deactivated successfully. Jan 17 00:06:07.258737 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:06:07.259548 systemd-logind[1652]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:06:07.260690 systemd-logind[1652]: Removed session 4. 
Jan 17 00:06:07.345355 systemd[1]: Started sshd@2-10.200.20.43:22-10.200.16.10:33472.service - OpenSSH per-connection server daemon (10.200.16.10:33472). Jan 17 00:06:07.834525 sshd[2127]: Accepted publickey for core from 10.200.16.10 port 33472 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:07.835907 sshd[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:07.840717 systemd-logind[1652]: New session 5 of user core. Jan 17 00:06:07.848233 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:06:08.183411 sshd[2127]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:08.187226 systemd[1]: sshd@2-10.200.20.43:22-10.200.16.10:33472.service: Deactivated successfully. Jan 17 00:06:08.188807 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:06:08.189456 systemd-logind[1652]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:06:08.190528 systemd-logind[1652]: Removed session 5. Jan 17 00:06:08.263532 systemd[1]: Started sshd@3-10.200.20.43:22-10.200.16.10:33474.service - OpenSSH per-connection server daemon (10.200.16.10:33474). Jan 17 00:06:08.712740 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 33474 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:08.714122 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:08.718970 systemd-logind[1652]: New session 6 of user core. Jan 17 00:06:08.731302 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:06:09.044838 sshd[2134]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:09.048638 systemd-logind[1652]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:06:09.049220 systemd[1]: sshd@3-10.200.20.43:22-10.200.16.10:33474.service: Deactivated successfully. Jan 17 00:06:09.050854 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:06:09.051866 systemd-logind[1652]: Removed session 6. Jan 17 00:06:09.132807 systemd[1]: Started sshd@4-10.200.20.43:22-10.200.16.10:33490.service - OpenSSH per-connection server daemon (10.200.16.10:33490). Jan 17 00:06:09.623894 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 33490 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:09.625266 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:09.630060 systemd-logind[1652]: New session 7 of user core. Jan 17 00:06:09.635188 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:06:10.101466 sudo[2144]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:06:10.101735 sudo[2144]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:10.113183 sudo[2144]: pam_unix(sudo:session): session closed for user root Jan 17 00:06:10.188729 sshd[2141]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:10.192366 systemd[1]: sshd@4-10.200.20.43:22-10.200.16.10:33490.service: Deactivated successfully. Jan 17 00:06:10.194387 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:06:10.195645 systemd-logind[1652]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:06:10.196607 systemd-logind[1652]: Removed session 7. Jan 17 00:06:10.277224 systemd[1]: Started sshd@5-10.200.20.43:22-10.200.16.10:47644.service - OpenSSH per-connection server daemon (10.200.16.10:47644). 
Jan 17 00:06:10.768946 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 47644 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:10.770380 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:10.774169 systemd-logind[1652]: New session 8 of user core. Jan 17 00:06:10.782243 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:06:11.046101 sudo[2153]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:06:11.046902 sudo[2153]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:11.050139 sudo[2153]: pam_unix(sudo:session): session closed for user root Jan 17 00:06:11.055038 sudo[2152]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:06:11.055391 sudo[2152]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:11.069414 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:06:11.070850 auditctl[2156]: No rules Jan 17 00:06:11.071305 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:06:11.071474 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:06:11.073996 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:06:11.099109 augenrules[2174]: No rules Jan 17 00:06:11.100757 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:06:11.102276 sudo[2152]: pam_unix(sudo:session): session closed for user root Jan 17 00:06:11.180246 sshd[2149]: pam_unix(sshd:session): session closed for user core Jan 17 00:06:11.184125 systemd[1]: sshd@5-10.200.20.43:22-10.200.16.10:47644.service: Deactivated successfully. Jan 17 00:06:11.185710 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:06:11.187263 systemd-logind[1652]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:06:11.188389 systemd-logind[1652]: Removed session 8. Jan 17 00:06:11.285280 systemd[1]: Started sshd@6-10.200.20.43:22-10.200.16.10:47650.service - OpenSSH per-connection server daemon (10.200.16.10:47650). Jan 17 00:06:11.769187 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 47650 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:06:11.770601 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:06:11.775493 systemd-logind[1652]: New session 9 of user core. Jan 17 00:06:11.781250 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:06:12.044026 sudo[2185]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:06:12.044332 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:06:12.992283 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:06:12.992878 (dockerd)[2201]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:06:13.695538 dockerd[2201]: time="2026-01-17T00:06:13.695485142Z" level=info msg="Starting up" Jan 17 00:06:14.098352 dockerd[2201]: time="2026-01-17T00:06:14.098310004Z" level=info msg="Loading containers: start." 
Jan 17 00:06:14.261070 kernel: Initializing XFRM netlink socket Jan 17 00:06:14.430208 systemd-networkd[1306]: docker0: Link UP Jan 17 00:06:14.448923 dockerd[2201]: time="2026-01-17T00:06:14.448350326Z" level=info msg="Loading containers: done." Jan 17 00:06:14.460328 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3710341282-merged.mount: Deactivated successfully. Jan 17 00:06:14.473913 dockerd[2201]: time="2026-01-17T00:06:14.473856395Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:06:14.474388 dockerd[2201]: time="2026-01-17T00:06:14.474252276Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:06:14.474535 dockerd[2201]: time="2026-01-17T00:06:14.474490196Z" level=info msg="Daemon has completed initialization" Jan 17 00:06:14.525794 dockerd[2201]: time="2026-01-17T00:06:14.525408895Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:06:14.525710 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:06:15.239273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:06:15.247223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:15.346802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:15.351065 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:15.387819 kubelet[2344]: E0117 00:06:15.387766 2344 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:15.390660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:15.390822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:06:15.426366 containerd[1672]: time="2026-01-17T00:06:15.426324329Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:06:16.288065 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 17 00:06:16.568791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2502479583.mount: Deactivated successfully. 
Jan 17 00:06:17.882091 containerd[1672]: time="2026-01-17T00:06:17.881637509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:17.883829 containerd[1672]: time="2026-01-17T00:06:17.883796471Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 17 00:06:17.886260 containerd[1672]: time="2026-01-17T00:06:17.886230154Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:17.891211 containerd[1672]: time="2026-01-17T00:06:17.891146680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:17.892191 containerd[1672]: time="2026-01-17T00:06:17.892158721Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.465793552s" Jan 17 00:06:17.892423 containerd[1672]: time="2026-01-17T00:06:17.892283281Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 17 00:06:17.893282 containerd[1672]: time="2026-01-17T00:06:17.893252082Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:06:18.024133 update_engine[1656]: I20260117 00:06:18.024073 1656 update_attempter.cc:509] Updating boot flags... 
Jan 17 00:06:18.095414 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2420) Jan 17 00:06:18.174096 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2421) Jan 17 00:06:19.559386 containerd[1672]: time="2026-01-17T00:06:19.559336516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:19.561794 containerd[1672]: time="2026-01-17T00:06:19.561763718Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 17 00:06:19.565610 containerd[1672]: time="2026-01-17T00:06:19.565584323Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:19.570721 containerd[1672]: time="2026-01-17T00:06:19.569602487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:19.570721 containerd[1672]: time="2026-01-17T00:06:19.570605209Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.677226767s" Jan 17 00:06:19.570721 containerd[1672]: time="2026-01-17T00:06:19.570634249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 17 00:06:19.571178 containerd[1672]: time="2026-01-17T00:06:19.571154169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:06:21.292086 containerd[1672]: time="2026-01-17T00:06:21.291704207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:21.294310 containerd[1672]: time="2026-01-17T00:06:21.294079770Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 17 00:06:21.296715 containerd[1672]: time="2026-01-17T00:06:21.296649453Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:21.302030 containerd[1672]: time="2026-01-17T00:06:21.300774499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:21.302030 containerd[1672]: time="2026-01-17T00:06:21.301907660Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.730719931s" Jan 17 00:06:21.302030 
containerd[1672]: time="2026-01-17T00:06:21.301938220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 17 00:06:21.303003 containerd[1672]: time="2026-01-17T00:06:21.302970261Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:06:22.561841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104040451.mount: Deactivated successfully. Jan 17 00:06:22.804676 containerd[1672]: time="2026-01-17T00:06:22.804622939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:22.806736 containerd[1672]: time="2026-01-17T00:06:22.806575341Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 17 00:06:22.809875 containerd[1672]: time="2026-01-17T00:06:22.809821065Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:22.814418 containerd[1672]: time="2026-01-17T00:06:22.814277191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:22.815512 containerd[1672]: time="2026-01-17T00:06:22.814853471Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.511735169s" Jan 17 00:06:22.815512 containerd[1672]: time="2026-01-17T00:06:22.814887191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 17 00:06:22.815512 containerd[1672]: time="2026-01-17T00:06:22.815362552Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:06:23.615121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount816293381.mount: Deactivated successfully. 
Jan 17 00:06:24.674420 containerd[1672]: time="2026-01-17T00:06:24.674362396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:24.676733 containerd[1672]: time="2026-01-17T00:06:24.676702919Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 17 00:06:24.679207 containerd[1672]: time="2026-01-17T00:06:24.679146962Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:24.683806 containerd[1672]: time="2026-01-17T00:06:24.683751328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:24.685344 containerd[1672]: time="2026-01-17T00:06:24.684829089Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.869436177s" Jan 17 00:06:24.685344 containerd[1672]: time="2026-01-17T00:06:24.684865449Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 17 00:06:24.685938 containerd[1672]: time="2026-01-17T00:06:24.685761810Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:06:25.199932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446536174.mount: Deactivated successfully. 
Jan 17 00:06:25.214305 containerd[1672]: time="2026-01-17T00:06:25.214254471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:25.216543 containerd[1672]: time="2026-01-17T00:06:25.216426554Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 17 00:06:25.219464 containerd[1672]: time="2026-01-17T00:06:25.219416997Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:25.227261 containerd[1672]: time="2026-01-17T00:06:25.227206727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:25.228081 containerd[1672]: time="2026-01-17T00:06:25.227932368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 542.136318ms" Jan 17 00:06:25.228081 containerd[1672]: time="2026-01-17T00:06:25.227968608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 17 00:06:25.228936 containerd[1672]: time="2026-01-17T00:06:25.228563849Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:06:25.489308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:06:25.497228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:26.012781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:26.017360 (kubelet)[2554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:06:26.051603 kubelet[2554]: E0117 00:06:26.051554 2554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:06:26.053617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:06:26.053742 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:06:27.071446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044232952.mount: Deactivated successfully. 
Jan 17 00:06:30.455221 containerd[1672]: time="2026-01-17T00:06:30.455173491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:30.457758 containerd[1672]: time="2026-01-17T00:06:30.457726378Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 17 00:06:30.460073 containerd[1672]: time="2026-01-17T00:06:30.460007424Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:30.465715 containerd[1672]: time="2026-01-17T00:06:30.465264958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:30.466529 containerd[1672]: time="2026-01-17T00:06:30.466476402Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 5.237879952s" Jan 17 00:06:30.466584 containerd[1672]: time="2026-01-17T00:06:30.466530762Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 17 00:06:35.435148 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:35.443277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:35.470660 systemd[1]: Reloading requested from client PID 2642 ('systemctl') (unit session-9.scope)... Jan 17 00:06:35.470676 systemd[1]: Reloading... Jan 17 00:06:35.571242 zram_generator::config[2686]: No configuration found. Jan 17 00:06:35.683227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:06:35.761471 systemd[1]: Reloading finished in 290 ms. Jan 17 00:06:35.799883 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:06:35.799956 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:06:35.800331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:35.805383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:36.000120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:36.008313 (kubelet)[2749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:06:36.047844 kubelet[2749]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:06:36.047844 kubelet[2749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:06:36.128491 kubelet[2749]: I0117 00:06:36.127694 2749 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:06:36.860357 kubelet[2749]: I0117 00:06:36.860316 2749 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:06:36.860609 kubelet[2749]: I0117 00:06:36.860505 2749 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:06:36.861789 kubelet[2749]: I0117 00:06:36.861770 2749 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:06:36.862602 kubelet[2749]: I0117 00:06:36.861864 2749 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:06:36.862602 kubelet[2749]: I0117 00:06:36.862142 2749 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:06:36.871850 kubelet[2749]: E0117 00:06:36.871808 2749 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:06:36.872838 kubelet[2749]: I0117 00:06:36.872810 2749 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:06:36.876821 kubelet[2749]: E0117 00:06:36.876779 2749 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:06:36.876916 kubelet[2749]: I0117 00:06:36.876864 2749 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:06:36.879870 kubelet[2749]: I0117 00:06:36.879832 2749 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:06:36.880134 kubelet[2749]: I0117 00:06:36.880102 2749 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:06:36.880273 kubelet[2749]: I0117 00:06:36.880132 2749 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-4c16a83c6c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:06:36.880351 kubelet[2749]: I0117 00:06:36.880277 2749 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:06:36.880351 kubelet[2749]: I0117 00:06:36.880286 2749 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:06:36.880396 kubelet[2749]: I0117 00:06:36.880391 2749 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:06:36.885343 kubelet[2749]: I0117 00:06:36.885314 2749 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:06:36.886633 kubelet[2749]: I0117 00:06:36.886604 2749 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:06:36.886633 kubelet[2749]: I0117 00:06:36.886633 2749 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:06:36.887083 kubelet[2749]: I0117 00:06:36.887062 2749 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:06:36.887118 kubelet[2749]: I0117 00:06:36.887088 2749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:06:36.888232 kubelet[2749]: E0117 00:06:36.888199 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4c16a83c6c&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:06:36.888592 kubelet[2749]: I0117 00:06:36.888576 2749 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:06:36.889172 kubelet[2749]: I0117 
00:06:36.889152 2749 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:06:36.889213 kubelet[2749]: I0117 00:06:36.889187 2749 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:06:36.889238 kubelet[2749]: W0117 00:06:36.889227 2749 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:06:36.895758 kubelet[2749]: I0117 00:06:36.895242 2749 server.go:1262] "Started kubelet" Jan 17 00:06:36.896675 kubelet[2749]: E0117 00:06:36.896650 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:06:36.896809 kubelet[2749]: I0117 00:06:36.896784 2749 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:06:36.897533 kubelet[2749]: I0117 00:06:36.897512 2749 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:06:36.899600 kubelet[2749]: I0117 00:06:36.898348 2749 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:06:36.899600 kubelet[2749]: I0117 00:06:36.898420 2749 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:06:36.899600 kubelet[2749]: I0117 00:06:36.898840 2749 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:06:36.900493 kubelet[2749]: E0117 00:06:36.899458 2749 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-4c16a83c6c.188b5bf76bb6414a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-4c16a83c6c,UID:ci-4081.3.6-n-4c16a83c6c,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-4c16a83c6c,},FirstTimestamp:2026-01-17 00:06:36.895207754 +0000 UTC m=+0.883916969,LastTimestamp:2026-01-17 00:06:36.895207754 +0000 UTC m=+0.883916969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-4c16a83c6c,}" Jan 17 00:06:36.902761 kubelet[2749]: I0117 00:06:36.902732 2749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:06:36.904409 kubelet[2749]: I0117 00:06:36.904370 2749 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:06:36.908423 kubelet[2749]: I0117 00:06:36.908407 2749 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:06:36.909744 kubelet[2749]: E0117 00:06:36.909716 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:36.910238 kubelet[2749]: E0117 00:06:36.910211 2749 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4c16a83c6c?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="200ms" Jan 17 00:06:36.910514 kubelet[2749]: E0117 00:06:36.910495 2749 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:06:36.911138 kubelet[2749]: I0117 00:06:36.911120 2749 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:06:36.911329 kubelet[2749]: I0117 00:06:36.911311 2749 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:06:36.912276 kubelet[2749]: I0117 00:06:36.912264 2749 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:06:36.912802 kubelet[2749]: E0117 00:06:36.912772 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:06:36.913220 kubelet[2749]: I0117 00:06:36.913200 2749 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:06:36.915097 kubelet[2749]: I0117 00:06:36.915067 2749 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:06:36.942429 kubelet[2749]: I0117 00:06:36.942386 2749 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:06:36.944149 kubelet[2749]: I0117 00:06:36.943768 2749 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:06:36.944149 kubelet[2749]: I0117 00:06:36.943793 2749 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:06:36.944149 kubelet[2749]: I0117 00:06:36.943834 2749 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:06:36.944149 kubelet[2749]: E0117 00:06:36.943880 2749 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:06:36.946592 kubelet[2749]: E0117 00:06:36.946557 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:06:37.010898 kubelet[2749]: E0117 00:06:37.010857 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:37.041112 kubelet[2749]: I0117 00:06:37.041085 2749 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:06:37.041112 kubelet[2749]: I0117 00:06:37.041103 2749 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:06:37.041112 kubelet[2749]: I0117 00:06:37.041120 2749 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:06:37.044086 kubelet[2749]: E0117 00:06:37.044057 2749 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:06:37.053681 kubelet[2749]: I0117 00:06:37.053655 2749 policy_none.go:49] "None policy: Start" Jan 17 00:06:37.053962 kubelet[2749]: I0117 00:06:37.053686 2749 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:06:37.053962 kubelet[2749]: I0117 00:06:37.053700 2749 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:06:37.064277 kubelet[2749]: I0117 00:06:37.064247 2749 policy_none.go:47] "Start" Jan 17 00:06:37.068577 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:06:37.079372 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:06:37.083645 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:06:37.092276 kubelet[2749]: E0117 00:06:37.089970 2749 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:06:37.092276 kubelet[2749]: I0117 00:06:37.090187 2749 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:06:37.092276 kubelet[2749]: I0117 00:06:37.090198 2749 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:06:37.092276 kubelet[2749]: I0117 00:06:37.090468 2749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:06:37.095242 kubelet[2749]: E0117 00:06:37.093869 2749 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:06:37.095242 kubelet[2749]: E0117 00:06:37.093937 2749 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:37.111184 kubelet[2749]: E0117 00:06:37.111067 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4c16a83c6c?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="400ms" Jan 17 00:06:37.192467 kubelet[2749]: I0117 00:06:37.192070 2749 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:37.192467 kubelet[2749]: E0117 00:06:37.192415 2749 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:37.317723 kubelet[2749]: I0117 00:06:37.317653 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f99930f7eb3be88aa66293db6b5257e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-4c16a83c6c\" (UID: \"5f99930f7eb3be88aa66293db6b5257e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:37.394744 kubelet[2749]: I0117 00:06:37.394315 2749 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:37.394744 kubelet[2749]: E0117 00:06:37.394602 2749 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:37.512234 kubelet[2749]: E0117 00:06:37.512192 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4c16a83c6c?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="800ms" Jan 17 00:06:37.796347 kubelet[2749]: I0117 00:06:37.796307 2749 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:37.796712 kubelet[2749]: E0117 00:06:37.796687 2749 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.041362 kubelet[2749]: E0117 00:06:38.041319 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4c16a83c6c&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:06:38.524118 kubelet[2749]: E0117 00:06:38.053793 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:06:38.524118 kubelet[2749]: E0117 00:06:38.089248 2749 reflector.go:205] "Failed to watch" 
err="failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:06:38.524118 kubelet[2749]: E0117 00:06:38.312881 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4c16a83c6c?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="1.6s" Jan 17 00:06:38.524118 kubelet[2749]: E0117 00:06:38.466220 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:06:38.598666 kubelet[2749]: I0117 00:06:38.598629 2749 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.599137 kubelet[2749]: E0117 00:06:38.599099 2749 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.637796 systemd[1]: Created slice kubepods-burstable-pod5f99930f7eb3be88aa66293db6b5257e.slice - libcontainer container kubepods-burstable-pod5f99930f7eb3be88aa66293db6b5257e.slice. Jan 17 00:06:38.655900 kubelet[2749]: E0117 00:06:38.655817 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.687068 containerd[1672]: time="2026-01-17T00:06:38.686995283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-4c16a83c6c,Uid:5f99930f7eb3be88aa66293db6b5257e,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:38.696097 systemd[1]: Created slice kubepods-burstable-pod4cf731206fda78f1f72b9b08a89fbb68.slice - libcontainer container kubepods-burstable-pod4cf731206fda78f1f72b9b08a89fbb68.slice. Jan 17 00:06:38.698550 kubelet[2749]: E0117 00:06:38.698360 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.713697 systemd[1]: Created slice kubepods-burstable-podaf7cd513613dc61aabdf32325d2d4ac3.slice - libcontainer container kubepods-burstable-podaf7cd513613dc61aabdf32325d2d4ac3.slice. 
Jan 17 00:06:38.715845 kubelet[2749]: E0117 00:06:38.715813 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726232 kubelet[2749]: I0117 00:06:38.726172 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726314 kubelet[2749]: I0117 00:06:38.726239 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726314 kubelet[2749]: I0117 00:06:38.726259 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726314 kubelet[2749]: I0117 00:06:38.726274 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726314 kubelet[2749]: I0117 00:06:38.726290 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726410 kubelet[2749]: I0117 00:06:38.726316 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4cf731206fda78f1f72b9b08a89fbb68-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" (UID: \"4cf731206fda78f1f72b9b08a89fbb68\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726410 kubelet[2749]: I0117 00:06:38.726353 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4cf731206fda78f1f72b9b08a89fbb68-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" (UID: \"4cf731206fda78f1f72b9b08a89fbb68\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.726410 kubelet[2749]: I0117 00:06:38.726369 2749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4cf731206fda78f1f72b9b08a89fbb68-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" (UID: \"4cf731206fda78f1f72b9b08a89fbb68\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:38.962819 kubelet[2749]: E0117 00:06:38.962705 2749 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:06:39.004269 containerd[1672]: time="2026-01-17T00:06:39.004225796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-4c16a83c6c,Uid:4cf731206fda78f1f72b9b08a89fbb68,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:39.021285 containerd[1672]: time="2026-01-17T00:06:39.020978066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-4c16a83c6c,Uid:af7cd513613dc61aabdf32325d2d4ac3,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:39.259469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463398884.mount: Deactivated successfully. Jan 17 00:06:39.277741 containerd[1672]: time="2026-01-17T00:06:39.277689274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:39.279917 containerd[1672]: time="2026-01-17T00:06:39.279879318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 00:06:39.281959 containerd[1672]: time="2026-01-17T00:06:39.281923961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:39.284828 containerd[1672]: time="2026-01-17T00:06:39.284097885Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:39.286469 containerd[1672]: time="2026-01-17T00:06:39.286439129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:06:39.289814 containerd[1672]: time="2026-01-17T00:06:39.288734013Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:39.291434 containerd[1672]: time="2026-01-17T00:06:39.291410978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:06:39.297122 containerd[1672]: time="2026-01-17T00:06:39.297084988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:06:39.298060 containerd[1672]: time="2026-01-17T00:06:39.298018189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 293.714592ms" Jan 17 00:06:39.299750 containerd[1672]: time="2026-01-17T00:06:39.299715032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 612.238829ms" Jan 17 00:06:39.301767 containerd[1672]: time="2026-01-17T00:06:39.301724356Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 280.64313ms" Jan 17 00:06:39.911588 kubelet[2749]: E0117 00:06:39.911547 2749 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-4c16a83c6c&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:06:39.913867 kubelet[2749]: E0117 00:06:39.913836 2749 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-4c16a83c6c?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="3.2s" Jan 17 00:06:39.965679 containerd[1672]: time="2026-01-17T00:06:39.965558555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:39.966786 containerd[1672]: time="2026-01-17T00:06:39.966632357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:39.966786 containerd[1672]: time="2026-01-17T00:06:39.966693277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:39.967923 containerd[1672]: time="2026-01-17T00:06:39.967609878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:39.968776 containerd[1672]: time="2026-01-17T00:06:39.968693920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:39.968964 containerd[1672]: time="2026-01-17T00:06:39.968758960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:39.968964 containerd[1672]: time="2026-01-17T00:06:39.968773960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:39.968964 containerd[1672]: time="2026-01-17T00:06:39.968850721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:39.970501 containerd[1672]: time="2026-01-17T00:06:39.970358603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:39.970501 containerd[1672]: time="2026-01-17T00:06:39.970418443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:39.970885 containerd[1672]: time="2026-01-17T00:06:39.970433363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:39.970885 containerd[1672]: time="2026-01-17T00:06:39.970534764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:39.993292 systemd[1]: Started cri-containerd-60c466bb41f4c154af0e7badf30ea8314fd2bb9a6b84bb04e8d944d6f2647ece.scope - libcontainer container 60c466bb41f4c154af0e7badf30ea8314fd2bb9a6b84bb04e8d944d6f2647ece. Jan 17 00:06:40.001080 systemd[1]: Started cri-containerd-798e113e865b3796be782977920968485f5bc059ce6bf98286cd89a32333b406.scope - libcontainer container 798e113e865b3796be782977920968485f5bc059ce6bf98286cd89a32333b406. Jan 17 00:06:40.006692 systemd[1]: Started cri-containerd-1602d006610b32d1adf6f30b67a25f7bb9fdb8ade67578adeaa5d1ac715a7eae.scope - libcontainer container 1602d006610b32d1adf6f30b67a25f7bb9fdb8ade67578adeaa5d1ac715a7eae. Jan 17 00:06:40.048225 containerd[1672]: time="2026-01-17T00:06:40.048173979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-4c16a83c6c,Uid:4cf731206fda78f1f72b9b08a89fbb68,Namespace:kube-system,Attempt:0,} returns sandbox id \"1602d006610b32d1adf6f30b67a25f7bb9fdb8ade67578adeaa5d1ac715a7eae\"" Jan 17 00:06:40.063529 containerd[1672]: time="2026-01-17T00:06:40.063404206Z" level=info msg="CreateContainer within sandbox \"1602d006610b32d1adf6f30b67a25f7bb9fdb8ade67578adeaa5d1ac715a7eae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:06:40.064875 containerd[1672]: time="2026-01-17T00:06:40.064827328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-4c16a83c6c,Uid:af7cd513613dc61aabdf32325d2d4ac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"798e113e865b3796be782977920968485f5bc059ce6bf98286cd89a32333b406\"" Jan 17 00:06:40.070672 containerd[1672]: time="2026-01-17T00:06:40.070538738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-4c16a83c6c,Uid:5f99930f7eb3be88aa66293db6b5257e,Namespace:kube-system,Attempt:0,} returns sandbox id \"60c466bb41f4c154af0e7badf30ea8314fd2bb9a6b84bb04e8d944d6f2647ece\"" Jan 17 00:06:40.073672 containerd[1672]: time="2026-01-17T00:06:40.073200023Z" level=info msg="CreateContainer within sandbox \"798e113e865b3796be782977920968485f5bc059ce6bf98286cd89a32333b406\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:06:40.078880 containerd[1672]: time="2026-01-17T00:06:40.078848433Z" level=info msg="CreateContainer within sandbox \"60c466bb41f4c154af0e7badf30ea8314fd2bb9a6b84bb04e8d944d6f2647ece\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:06:40.139023 containerd[1672]: time="2026-01-17T00:06:40.138976618Z" level=info msg="CreateContainer within sandbox \"1602d006610b32d1adf6f30b67a25f7bb9fdb8ade67578adeaa5d1ac715a7eae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c0cfb6553fd4163b399146d22b87463e5dd6e5569b28ceb6f3371cb3920b880\"" Jan 17 00:06:40.140133 containerd[1672]: 
time="2026-01-17T00:06:40.140005419Z" level=info msg="StartContainer for \"5c0cfb6553fd4163b399146d22b87463e5dd6e5569b28ceb6f3371cb3920b880\"" Jan 17 00:06:40.143611 containerd[1672]: time="2026-01-17T00:06:40.143521546Z" level=info msg="CreateContainer within sandbox \"60c466bb41f4c154af0e7badf30ea8314fd2bb9a6b84bb04e8d944d6f2647ece\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f1f50a03bb70b1e43dc76324d0d8a582af5ace864d41f124dc2cbbcf0e2fd92\"" Jan 17 00:06:40.144619 containerd[1672]: time="2026-01-17T00:06:40.144509187Z" level=info msg="StartContainer for \"9f1f50a03bb70b1e43dc76324d0d8a582af5ace864d41f124dc2cbbcf0e2fd92\"" Jan 17 00:06:40.146136 containerd[1672]: time="2026-01-17T00:06:40.146110110Z" level=info msg="CreateContainer within sandbox \"798e113e865b3796be782977920968485f5bc059ce6bf98286cd89a32333b406\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7b263f31955b7ae1d335a92621f3d781c6b1d0d5a179962408d3315f6a14625a\"" Jan 17 00:06:40.147823 containerd[1672]: time="2026-01-17T00:06:40.146803231Z" level=info msg="StartContainer for \"7b263f31955b7ae1d335a92621f3d781c6b1d0d5a179962408d3315f6a14625a\"" Jan 17 00:06:40.175650 systemd[1]: Started cri-containerd-5c0cfb6553fd4163b399146d22b87463e5dd6e5569b28ceb6f3371cb3920b880.scope - libcontainer container 5c0cfb6553fd4163b399146d22b87463e5dd6e5569b28ceb6f3371cb3920b880. Jan 17 00:06:40.187737 systemd[1]: Started cri-containerd-9f1f50a03bb70b1e43dc76324d0d8a582af5ace864d41f124dc2cbbcf0e2fd92.scope - libcontainer container 9f1f50a03bb70b1e43dc76324d0d8a582af5ace864d41f124dc2cbbcf0e2fd92. Jan 17 00:06:40.191988 systemd[1]: Started cri-containerd-7b263f31955b7ae1d335a92621f3d781c6b1d0d5a179962408d3315f6a14625a.scope - libcontainer container 7b263f31955b7ae1d335a92621f3d781c6b1d0d5a179962408d3315f6a14625a. 
Jan 17 00:06:40.204540 kubelet[2749]: I0117 00:06:40.203776 2749 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:40.204540 kubelet[2749]: E0117 00:06:40.204148 2749 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:40.242009 containerd[1672]: time="2026-01-17T00:06:40.241756837Z" level=info msg="StartContainer for \"5c0cfb6553fd4163b399146d22b87463e5dd6e5569b28ceb6f3371cb3920b880\" returns successfully" Jan 17 00:06:40.271868 containerd[1672]: time="2026-01-17T00:06:40.271708969Z" level=info msg="StartContainer for \"7b263f31955b7ae1d335a92621f3d781c6b1d0d5a179962408d3315f6a14625a\" returns successfully" Jan 17 00:06:40.277087 containerd[1672]: time="2026-01-17T00:06:40.277033859Z" level=info msg="StartContainer for \"9f1f50a03bb70b1e43dc76324d0d8a582af5ace864d41f124dc2cbbcf0e2fd92\" returns successfully" Jan 17 00:06:40.958579 kubelet[2749]: E0117 00:06:40.958379 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:40.960940 kubelet[2749]: E0117 00:06:40.960881 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:40.963061 kubelet[2749]: E0117 00:06:40.963020 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:41.965907 kubelet[2749]: E0117 00:06:41.965875 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:41.967123 kubelet[2749]: E0117 00:06:41.966271 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:42.959639 kubelet[2749]: E0117 00:06:42.959603 2749 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.6-n-4c16a83c6c" not found Jan 17 00:06:43.127096 kubelet[2749]: E0117 00:06:43.127052 2749 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:43.406682 kubelet[2749]: I0117 00:06:43.406396 2749 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:43.419535 kubelet[2749]: I0117 00:06:43.419500 2749 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:43.419535 kubelet[2749]: E0117 00:06:43.419540 2749 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-4c16a83c6c\": node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:43.431527 kubelet[2749]: E0117 00:06:43.431389 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:43.532384 kubelet[2749]: E0117 00:06:43.532341 2749 
kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:43.633215 kubelet[2749]: E0117 00:06:43.633161 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:43.733989 kubelet[2749]: E0117 00:06:43.733872 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:43.834535 kubelet[2749]: E0117 00:06:43.834499 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:43.935505 kubelet[2749]: E0117 00:06:43.935464 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.036428 kubelet[2749]: E0117 00:06:44.036384 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.057684 kubelet[2749]: E0117 00:06:44.057658 2749 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:44.137294 kubelet[2749]: E0117 00:06:44.137254 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.237897 kubelet[2749]: E0117 00:06:44.237858 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.338436 kubelet[2749]: E0117 00:06:44.338115 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.438891 kubelet[2749]: E0117 00:06:44.438846 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.539479 kubelet[2749]: E0117 00:06:44.539438 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.640205 kubelet[2749]: E0117 00:06:44.640074 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.740830 kubelet[2749]: E0117 00:06:44.740796 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.841391 kubelet[2749]: E0117 00:06:44.841353 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:44.941563 kubelet[2749]: E0117 00:06:44.941448 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:45.042357 kubelet[2749]: E0117 00:06:45.042308 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:45.143146 kubelet[2749]: E0117 00:06:45.143098 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:45.243504 kubelet[2749]: E0117 00:06:45.243469 2749 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" 
not found" Jan 17 00:06:45.311706 kubelet[2749]: I0117 00:06:45.311376 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:45.367975 kubelet[2749]: I0117 00:06:45.367906 2749 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:45.368317 kubelet[2749]: I0117 00:06:45.368300 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:45.377035 kubelet[2749]: I0117 00:06:45.376782 2749 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:45.377035 kubelet[2749]: I0117 00:06:45.376868 2749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:45.465703 kubelet[2749]: I0117 00:06:45.465663 2749 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:45.902786 kubelet[2749]: I0117 00:06:45.902535 2749 apiserver.go:52] "Watching apiserver" Jan 17 00:06:45.912986 kubelet[2749]: I0117 00:06:45.912956 2749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:06:46.091811 systemd[1]: Reloading requested from client PID 3037 ('systemctl') (unit session-9.scope)... Jan 17 00:06:46.092088 systemd[1]: Reloading... Jan 17 00:06:46.207088 zram_generator::config[3080]: No configuration found. Jan 17 00:06:46.318352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:06:46.412593 systemd[1]: Reloading finished in 320 ms. Jan 17 00:06:46.450230 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:46.463168 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:06:46.463387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:46.463445 systemd[1]: kubelet.service: Consumed 1.148s CPU time, 125.4M memory peak, 0B memory swap peak. Jan 17 00:06:46.467386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:06:46.578356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:06:46.586500 (kubelet)[3141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:06:46.627675 kubelet[3141]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:06:46.627675 kubelet[3141]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 00:06:46.628020 kubelet[3141]: I0117 00:06:46.627728 3141 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:06:46.633582 kubelet[3141]: I0117 00:06:46.633543 3141 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:06:46.633582 kubelet[3141]: I0117 00:06:46.633574 3141 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:06:46.633747 kubelet[3141]: I0117 00:06:46.633605 3141 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:06:46.633747 kubelet[3141]: I0117 00:06:46.633611 3141 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:06:46.633867 kubelet[3141]: I0117 00:06:46.633849 3141 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:06:46.635144 kubelet[3141]: I0117 00:06:46.635122 3141 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 17 00:06:46.639435 kubelet[3141]: I0117 00:06:46.639148 3141 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:06:46.643083 kubelet[3141]: E0117 00:06:46.643026 3141 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:06:46.643288 kubelet[3141]: I0117 00:06:46.643277 3141 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:06:46.646698 kubelet[3141]: I0117 00:06:46.646611 3141 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 17 00:06:46.648034 kubelet[3141]: I0117 00:06:46.646958 3141 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:06:46.648034 kubelet[3141]: I0117 00:06:46.646988 3141 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-4c16a83c6c","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:06:46.648034 kubelet[3141]: I0117 00:06:46.647168 3141 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:06:46.648034 kubelet[3141]: I0117 00:06:46.647178 3141 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:06:46.648270 kubelet[3141]: I0117 00:06:46.647205 3141 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 17 00:06:46.648270 kubelet[3141]: I0117 00:06:46.647975 3141 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:06:46.648478 kubelet[3141]: I0117 00:06:46.648462 3141 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:06:46.648551 kubelet[3141]: I0117 00:06:46.648542 3141 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:06:46.648619 kubelet[3141]: I0117 00:06:46.648610 3141 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:06:46.648690 kubelet[3141]: I0117 00:06:46.648680 3141 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:06:46.652058 kubelet[3141]: I0117 00:06:46.650535 3141 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:06:46.652058 kubelet[3141]: I0117 00:06:46.651200 3141 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:06:46.652058 kubelet[3141]: I0117 00:06:46.651230 3141 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 
00:06:46.654035 kubelet[3141]: I0117 00:06:46.654002 3141 server.go:1262] "Started kubelet" Jan 17 00:06:46.655728 kubelet[3141]: I0117 00:06:46.655706 3141 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:06:46.663796 kubelet[3141]: I0117 00:06:46.663752 3141 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:06:46.664906 kubelet[3141]: I0117 00:06:46.664881 3141 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:06:46.666859 kubelet[3141]: I0117 00:06:46.666528 3141 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:06:46.667010 kubelet[3141]: I0117 00:06:46.666991 3141 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:06:46.669066 kubelet[3141]: I0117 00:06:46.667472 3141 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:06:46.669792 kubelet[3141]: I0117 00:06:46.669757 3141 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:06:46.670032 kubelet[3141]: E0117 00:06:46.670008 3141 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-4c16a83c6c\" not found" Jan 17 00:06:46.672051 kubelet[3141]: I0117 00:06:46.670722 3141 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:06:46.672525 kubelet[3141]: I0117 00:06:46.672498 3141 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:06:46.674056 kubelet[3141]: I0117 00:06:46.672641 3141 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:06:46.676168 kubelet[3141]: I0117 00:06:46.675179 3141 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:06:46.676228 kubelet[3141]: I0117 00:06:46.676177 3141 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 17 00:06:46.676228 kubelet[3141]: I0117 00:06:46.676193 3141 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:06:46.676228 kubelet[3141]: I0117 00:06:46.676214 3141 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:06:46.676309 kubelet[3141]: E0117 00:06:46.676254 3141 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:06:46.701650 kubelet[3141]: E0117 00:06:46.701613 3141 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:06:46.704172 kubelet[3141]: I0117 00:06:46.704145 3141 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:06:46.704283 kubelet[3141]: I0117 00:06:46.704262 3141 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:06:46.705850 kubelet[3141]: I0117 00:06:46.705711 3141 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:06:46.780237 kubelet[3141]: E0117 00:06:46.780191 3141 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:06:46.788655 kubelet[3141]: I0117 00:06:46.788626 3141 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:06:46.788655 kubelet[3141]: I0117 00:06:46.788645 3141 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:06:46.788821 kubelet[3141]: I0117 00:06:46.788671 3141 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:06:46.788846 kubelet[3141]: I0117 00:06:46.788819 3141 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:06:46.788868 kubelet[3141]: I0117 00:06:46.788827 3141 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:06:46.788868 kubelet[3141]: I0117 00:06:46.788853 3141 policy_none.go:49] "None policy: Start" Jan 17 00:06:46.788868 kubelet[3141]: I0117 00:06:46.788861 3141 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:06:46.788935 kubelet[3141]: I0117 00:06:46.788871 3141 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:06:46.788978 kubelet[3141]: I0117 00:06:46.788963 3141 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 17 00:06:46.788978 kubelet[3141]: I0117 00:06:46.788976 3141 policy_none.go:47] "Start" Jan 17 00:06:46.798205 kubelet[3141]: E0117 00:06:46.796579 3141 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:06:46.798205 kubelet[3141]: I0117 00:06:46.796776 3141 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:06:46.798205 kubelet[3141]: I0117 00:06:46.796789 3141 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:06:46.798205 kubelet[3141]: I0117 00:06:46.797058 3141 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:06:46.802347 kubelet[3141]: E0117 00:06:46.800145 3141 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:06:46.909980 kubelet[3141]: I0117 00:06:46.909877 3141 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:46.922179 kubelet[3141]: I0117 00:06:46.922141 3141 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:46.922305 kubelet[3141]: I0117 00:06:46.922234 3141 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:46.982111 kubelet[3141]: I0117 00:06:46.982067 3141 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:46.984077 kubelet[3141]: I0117 00:06:46.982291 3141 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:46.984077 kubelet[3141]: I0117 00:06:46.982534 3141 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.068617 kubelet[3141]: I0117 00:06:47.068566 3141 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:47.068741 kubelet[3141]: E0117 00:06:47.068624 3141 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.068741 kubelet[3141]: I0117 00:06:47.068683 3141 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:47.068741 kubelet[3141]: E0117 00:06:47.068705 3141 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-4c16a83c6c\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.069383 kubelet[3141]: I0117 00:06:47.069360 3141 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:47.069466 kubelet[3141]: E0117 00:06:47.069395 3141 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.175731 kubelet[3141]: I0117 00:06:47.175455 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f99930f7eb3be88aa66293db6b5257e-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-4c16a83c6c\" (UID: \"5f99930f7eb3be88aa66293db6b5257e\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.175731 kubelet[3141]: I0117 00:06:47.175527 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4cf731206fda78f1f72b9b08a89fbb68-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" (UID: \"4cf731206fda78f1f72b9b08a89fbb68\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.175731 kubelet[3141]: I0117 00:06:47.175576 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4cf731206fda78f1f72b9b08a89fbb68-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" (UID: \"4cf731206fda78f1f72b9b08a89fbb68\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.175731 kubelet[3141]: I0117 00:06:47.175625 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.175731 kubelet[3141]: I0117 00:06:47.175648 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.176126 kubelet[3141]: I0117 00:06:47.176101 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.176178 kubelet[3141]: I0117 00:06:47.176142 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.176178 kubelet[3141]: I0117 00:06:47.176158 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4cf731206fda78f1f72b9b08a89fbb68-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" (UID: \"4cf731206fda78f1f72b9b08a89fbb68\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.176226 kubelet[3141]: I0117 00:06:47.176174 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af7cd513613dc61aabdf32325d2d4ac3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-4c16a83c6c\" (UID: \"af7cd513613dc61aabdf32325d2d4ac3\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.649859 kubelet[3141]: I0117 00:06:47.649820 3141 apiserver.go:52] "Watching apiserver" Jan 17 00:06:47.672745 kubelet[3141]: I0117 00:06:47.672704 3141 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 17 00:06:47.717932 kubelet[3141]: I0117 00:06:47.717749 3141 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.733228 kubelet[3141]: I0117 00:06:47.733200 3141 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 17 00:06:47.734265 kubelet[3141]: 
E0117 00:06:47.733427 3141 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-4c16a83c6c\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" Jan 17 00:06:47.863572 kubelet[3141]: I0117 00:06:47.863237 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-4c16a83c6c" podStartSLOduration=2.863217969 podStartE2EDuration="2.863217969s" podCreationTimestamp="2026-01-17 00:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:47.772761976 +0000 UTC m=+1.180616497" watchObservedRunningTime="2026-01-17 00:06:47.863217969 +0000 UTC m=+1.271072450" Jan 17 00:06:47.908804 kubelet[3141]: I0117 00:06:47.908671 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-4c16a83c6c" podStartSLOduration=2.908654767 podStartE2EDuration="2.908654767s" podCreationTimestamp="2026-01-17 00:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:47.864507616 +0000 UTC m=+1.272362137" watchObservedRunningTime="2026-01-17 00:06:47.908654767 +0000 UTC m=+1.316509288" Jan 17 00:06:47.909031 kubelet[3141]: I0117 00:06:47.908807 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-4c16a83c6c" podStartSLOduration=2.908803567 podStartE2EDuration="2.908803567s" podCreationTimestamp="2026-01-17 00:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:47.908480046 +0000 UTC m=+1.316334567" watchObservedRunningTime="2026-01-17 00:06:47.908803567 +0000 UTC m=+1.316658048" Jan 17 00:06:50.978090 kubelet[3141]: I0117 00:06:50.975752 3141 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:06:50.978443 containerd[1672]: time="2026-01-17T00:06:50.978038709Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:06:50.979295 kubelet[3141]: I0117 00:06:50.978861 3141 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:06:52.020954 systemd[1]: Created slice kubepods-besteffort-podc8bffa95_99b5_4c9d_9ef2_adf56c9d1762.slice - libcontainer container kubepods-besteffort-podc8bffa95_99b5_4c9d_9ef2_adf56c9d1762.slice. 
Jan 17 00:06:52.102370 kubelet[3141]: I0117 00:06:52.102224 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8bffa95-99b5-4c9d-9ef2-adf56c9d1762-lib-modules\") pod \"kube-proxy-tmwvf\" (UID: \"c8bffa95-99b5-4c9d-9ef2-adf56c9d1762\") " pod="kube-system/kube-proxy-tmwvf" Jan 17 00:06:52.102370 kubelet[3141]: I0117 00:06:52.102270 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8bffa95-99b5-4c9d-9ef2-adf56c9d1762-kube-proxy\") pod \"kube-proxy-tmwvf\" (UID: \"c8bffa95-99b5-4c9d-9ef2-adf56c9d1762\") " pod="kube-system/kube-proxy-tmwvf" Jan 17 00:06:52.102370 kubelet[3141]: I0117 00:06:52.102285 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8bffa95-99b5-4c9d-9ef2-adf56c9d1762-xtables-lock\") pod \"kube-proxy-tmwvf\" (UID: \"c8bffa95-99b5-4c9d-9ef2-adf56c9d1762\") " pod="kube-system/kube-proxy-tmwvf" Jan 17 00:06:52.102370 kubelet[3141]: I0117 00:06:52.102304 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnp8q\" (UniqueName: \"kubernetes.io/projected/c8bffa95-99b5-4c9d-9ef2-adf56c9d1762-kube-api-access-rnp8q\") pod \"kube-proxy-tmwvf\" (UID: \"c8bffa95-99b5-4c9d-9ef2-adf56c9d1762\") " pod="kube-system/kube-proxy-tmwvf" Jan 17 00:06:52.240239 systemd[1]: Created slice kubepods-besteffort-podc2daaa5c_03fe_4b70_bfa9_af7076b18775.slice - libcontainer container kubepods-besteffort-podc2daaa5c_03fe_4b70_bfa9_af7076b18775.slice. Jan 17 00:06:52.304312 kubelet[3141]: I0117 00:06:52.303749 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshhp\" (UniqueName: \"kubernetes.io/projected/c2daaa5c-03fe-4b70-bfa9-af7076b18775-kube-api-access-sshhp\") pod \"tigera-operator-65cdcdfd6d-pvxj4\" (UID: \"c2daaa5c-03fe-4b70-bfa9-af7076b18775\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-pvxj4" Jan 17 00:06:52.304312 kubelet[3141]: I0117 00:06:52.303795 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2daaa5c-03fe-4b70-bfa9-af7076b18775-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-pvxj4\" (UID: \"c2daaa5c-03fe-4b70-bfa9-af7076b18775\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-pvxj4" Jan 17 00:06:52.334381 containerd[1672]: time="2026-01-17T00:06:52.334332070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmwvf,Uid:c8bffa95-99b5-4c9d-9ef2-adf56c9d1762,Namespace:kube-system,Attempt:0,}" Jan 17 00:06:52.369017 containerd[1672]: time="2026-01-17T00:06:52.368449884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:52.369017 containerd[1672]: time="2026-01-17T00:06:52.368507925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:52.369017 containerd[1672]: time="2026-01-17T00:06:52.368523205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:52.369017 containerd[1672]: time="2026-01-17T00:06:52.368602765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:52.385206 systemd[1]: run-containerd-runc-k8s.io-ea5bfe201296709269faa992276d2b57a9a5705fb7fa638d264ad8d9aa76280c-runc.3sxHWZ.mount: Deactivated successfully. Jan 17 00:06:52.394294 systemd[1]: Started cri-containerd-ea5bfe201296709269faa992276d2b57a9a5705fb7fa638d264ad8d9aa76280c.scope - libcontainer container ea5bfe201296709269faa992276d2b57a9a5705fb7fa638d264ad8d9aa76280c. Jan 17 00:06:52.424658 containerd[1672]: time="2026-01-17T00:06:52.424597974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmwvf,Uid:c8bffa95-99b5-4c9d-9ef2-adf56c9d1762,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea5bfe201296709269faa992276d2b57a9a5705fb7fa638d264ad8d9aa76280c\"" Jan 17 00:06:52.431877 containerd[1672]: time="2026-01-17T00:06:52.431831185Z" level=info msg="CreateContainer within sandbox \"ea5bfe201296709269faa992276d2b57a9a5705fb7fa638d264ad8d9aa76280c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:06:52.460975 containerd[1672]: time="2026-01-17T00:06:52.460924952Z" level=info msg="CreateContainer within sandbox \"ea5bfe201296709269faa992276d2b57a9a5705fb7fa638d264ad8d9aa76280c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3ad65e31aaaa3afda93077e04b7de78636b08077bb0a15b593bd7c1fec19defb\"" Jan 17 00:06:52.462079 containerd[1672]: time="2026-01-17T00:06:52.461805433Z" level=info msg="StartContainer for \"3ad65e31aaaa3afda93077e04b7de78636b08077bb0a15b593bd7c1fec19defb\"" Jan 17 00:06:52.488295 systemd[1]: Started cri-containerd-3ad65e31aaaa3afda93077e04b7de78636b08077bb0a15b593bd7c1fec19defb.scope - libcontainer container 3ad65e31aaaa3afda93077e04b7de78636b08077bb0a15b593bd7c1fec19defb. Jan 17 00:06:52.515312 containerd[1672]: time="2026-01-17T00:06:52.515183278Z" level=info msg="StartContainer for \"3ad65e31aaaa3afda93077e04b7de78636b08077bb0a15b593bd7c1fec19defb\" returns successfully" Jan 17 00:06:52.549026 containerd[1672]: time="2026-01-17T00:06:52.548670452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-pvxj4,Uid:c2daaa5c-03fe-4b70-bfa9-af7076b18775,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:06:52.588857 containerd[1672]: time="2026-01-17T00:06:52.587713234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:06:52.588857 containerd[1672]: time="2026-01-17T00:06:52.587766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:06:52.588857 containerd[1672]: time="2026-01-17T00:06:52.587785194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:52.588857 containerd[1672]: time="2026-01-17T00:06:52.587886154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:06:52.610216 systemd[1]: Started cri-containerd-6f13db5a43af681591b24ba5ea98620bee6aa17e8028baac16e85e70a077597d.scope - libcontainer container 6f13db5a43af681591b24ba5ea98620bee6aa17e8028baac16e85e70a077597d. 
Jan 17 00:06:52.641283 containerd[1672]: time="2026-01-17T00:06:52.641222559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-pvxj4,Uid:c2daaa5c-03fe-4b70-bfa9-af7076b18775,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6f13db5a43af681591b24ba5ea98620bee6aa17e8028baac16e85e70a077597d\"" Jan 17 00:06:52.643345 containerd[1672]: time="2026-01-17T00:06:52.643298482Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:06:54.313175 kubelet[3141]: I0117 00:06:54.313063 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmwvf" podStartSLOduration=3.313038724 podStartE2EDuration="3.313038724s" podCreationTimestamp="2026-01-17 00:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:06:52.755436818 +0000 UTC m=+6.163291339" watchObservedRunningTime="2026-01-17 00:06:54.313038724 +0000 UTC m=+7.720893245" Jan 17 00:06:54.550430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819664614.mount: Deactivated successfully. Jan 17 00:06:55.276982 containerd[1672]: time="2026-01-17T00:06:55.276208854Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:55.278084 containerd[1672]: time="2026-01-17T00:06:55.278054137Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 17 00:06:55.280548 containerd[1672]: time="2026-01-17T00:06:55.280516500Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:55.285102 containerd[1672]: time="2026-01-17T00:06:55.285057507Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:06:55.286165 containerd[1672]: time="2026-01-17T00:06:55.285919668Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.642588026s" Jan 17 00:06:55.286165 containerd[1672]: time="2026-01-17T00:06:55.285954628Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 17 00:06:55.292308 containerd[1672]: time="2026-01-17T00:06:55.292269878Z" level=info msg="CreateContainer within sandbox \"6f13db5a43af681591b24ba5ea98620bee6aa17e8028baac16e85e70a077597d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:06:55.311690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697772771.mount: Deactivated successfully. 
Jan 17 00:06:55.319668 containerd[1672]: time="2026-01-17T00:06:55.319624879Z" level=info msg="CreateContainer within sandbox \"6f13db5a43af681591b24ba5ea98620bee6aa17e8028baac16e85e70a077597d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"62ddb3fad36a7ee51381ff5e5da0e6d0e8bca3d85d9aa970ae4228ddabeb2a0b\"" Jan 17 00:06:55.320213 containerd[1672]: time="2026-01-17T00:06:55.320187120Z" level=info msg="StartContainer for \"62ddb3fad36a7ee51381ff5e5da0e6d0e8bca3d85d9aa970ae4228ddabeb2a0b\"" Jan 17 00:06:55.342187 systemd[1]: Started cri-containerd-62ddb3fad36a7ee51381ff5e5da0e6d0e8bca3d85d9aa970ae4228ddabeb2a0b.scope - libcontainer container 62ddb3fad36a7ee51381ff5e5da0e6d0e8bca3d85d9aa970ae4228ddabeb2a0b. Jan 17 00:06:55.369657 containerd[1672]: time="2026-01-17T00:06:55.369605674Z" level=info msg="StartContainer for \"62ddb3fad36a7ee51381ff5e5da0e6d0e8bca3d85d9aa970ae4228ddabeb2a0b\" returns successfully" Jan 17 00:06:58.491795 kubelet[3141]: I0117 00:06:58.491735 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-pvxj4" podStartSLOduration=3.847500146 podStartE2EDuration="6.491719455s" podCreationTimestamp="2026-01-17 00:06:52 +0000 UTC" firstStartedPulling="2026-01-17 00:06:52.642653561 +0000 UTC m=+6.050508082" lastFinishedPulling="2026-01-17 00:06:55.28687291 +0000 UTC m=+8.694727391" observedRunningTime="2026-01-17 00:06:55.75882858 +0000 UTC m=+9.166683101" watchObservedRunningTime="2026-01-17 00:06:58.491719455 +0000 UTC m=+11.899573976" Jan 17 00:07:01.534647 sudo[2185]: pam_unix(sudo:session): session closed for user root Jan 17 00:07:01.610766 sshd[2182]: pam_unix(sshd:session): session closed for user core Jan 17 00:07:01.614403 systemd-logind[1652]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:07:01.615589 systemd[1]: sshd@6-10.200.20.43:22-10.200.16.10:47650.service: Deactivated successfully. Jan 17 00:07:01.621240 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:07:01.621593 systemd[1]: session-9.scope: Consumed 6.288s CPU time, 149.9M memory peak, 0B memory swap peak. Jan 17 00:07:01.625444 systemd-logind[1652]: Removed session 9. Jan 17 00:07:11.644139 systemd[1]: Created slice kubepods-besteffort-podf987065a_2ca4_4196_8cac_e76069940fd7.slice - libcontainer container kubepods-besteffort-podf987065a_2ca4_4196_8cac_e76069940fd7.slice. 
Jan 17 00:07:11.723325 kubelet[3141]: I0117 00:07:11.723283 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f987065a-2ca4-4196-8cac-e76069940fd7-tigera-ca-bundle\") pod \"calico-typha-77486fb678-nmbtf\" (UID: \"f987065a-2ca4-4196-8cac-e76069940fd7\") " pod="calico-system/calico-typha-77486fb678-nmbtf" Jan 17 00:07:11.723325 kubelet[3141]: I0117 00:07:11.723323 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f987065a-2ca4-4196-8cac-e76069940fd7-typha-certs\") pod \"calico-typha-77486fb678-nmbtf\" (UID: \"f987065a-2ca4-4196-8cac-e76069940fd7\") " pod="calico-system/calico-typha-77486fb678-nmbtf" Jan 17 00:07:11.723325 kubelet[3141]: I0117 00:07:11.723342 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf72l\" (UniqueName: \"kubernetes.io/projected/f987065a-2ca4-4196-8cac-e76069940fd7-kube-api-access-bf72l\") pod \"calico-typha-77486fb678-nmbtf\" (UID: \"f987065a-2ca4-4196-8cac-e76069940fd7\") " pod="calico-system/calico-typha-77486fb678-nmbtf" Jan 17 00:07:11.886196 systemd[1]: Created slice kubepods-besteffort-poda43a9104_5ee2_43d5_b98a_7b31de1d11df.slice - libcontainer container kubepods-besteffort-poda43a9104_5ee2_43d5_b98a_7b31de1d11df.slice. Jan 17 00:07:11.925496 kubelet[3141]: I0117 00:07:11.925135 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-cni-bin-dir\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925496 kubelet[3141]: I0117 00:07:11.925179 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-cni-net-dir\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925496 kubelet[3141]: I0117 00:07:11.925193 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a43a9104-5ee2-43d5-b98a-7b31de1d11df-node-certs\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925496 kubelet[3141]: I0117 00:07:11.925209 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-cni-log-dir\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925496 kubelet[3141]: I0117 00:07:11.925225 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-lib-modules\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925700 kubelet[3141]: I0117 00:07:11.925239 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a43a9104-5ee2-43d5-b98a-7b31de1d11df-tigera-ca-bundle\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925700 kubelet[3141]: I0117 00:07:11.925256 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-flexvol-driver-host\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925700 kubelet[3141]: I0117 00:07:11.925271 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmr2k\" (UniqueName: \"kubernetes.io/projected/a43a9104-5ee2-43d5-b98a-7b31de1d11df-kube-api-access-cmr2k\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925700 kubelet[3141]: I0117 00:07:11.925285 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-policysync\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925700 kubelet[3141]: I0117 00:07:11.925301 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-xtables-lock\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925807 kubelet[3141]: I0117 00:07:11.925314 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-var-lib-calico\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.925807 kubelet[3141]: I0117 00:07:11.925328 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a43a9104-5ee2-43d5-b98a-7b31de1d11df-var-run-calico\") pod \"calico-node-zfhcx\" (UID: \"a43a9104-5ee2-43d5-b98a-7b31de1d11df\") " pod="calico-system/calico-node-zfhcx" Jan 17 00:07:11.952605 containerd[1672]: time="2026-01-17T00:07:11.952555097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77486fb678-nmbtf,Uid:f987065a-2ca4-4196-8cac-e76069940fd7,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:12.034300 kubelet[3141]: E0117 00:07:12.034262 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.034536 kubelet[3141]: W0117 00:07:12.034457 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.034536 kubelet[3141]: E0117 00:07:12.034493 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.049717 kubelet[3141]: E0117 00:07:12.048955 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.049717 kubelet[3141]: W0117 00:07:12.048995 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.049717 kubelet[3141]: E0117 00:07:12.049014 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.131247 kubelet[3141]: E0117 00:07:12.130965 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:12.219410 kubelet[3141]: E0117 00:07:12.219311 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.219778 kubelet[3141]: W0117 00:07:12.219535 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.219778 kubelet[3141]: E0117 00:07:12.219562 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.219992 kubelet[3141]: E0117 00:07:12.219980 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.220203 kubelet[3141]: W0117 00:07:12.220065 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.220203 kubelet[3141]: E0117 00:07:12.220112 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.220470 kubelet[3141]: E0117 00:07:12.220359 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.220470 kubelet[3141]: W0117 00:07:12.220371 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.220470 kubelet[3141]: E0117 00:07:12.220381 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.220632 kubelet[3141]: E0117 00:07:12.220620 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.220763 kubelet[3141]: W0117 00:07:12.220671 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.220763 kubelet[3141]: E0117 00:07:12.220684 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.221088 kubelet[3141]: E0117 00:07:12.220970 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.221088 kubelet[3141]: W0117 00:07:12.220986 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.221088 kubelet[3141]: E0117 00:07:12.220997 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.221292 kubelet[3141]: E0117 00:07:12.221254 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.221292 kubelet[3141]: W0117 00:07:12.221267 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.221292 kubelet[3141]: E0117 00:07:12.221277 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.221681 kubelet[3141]: E0117 00:07:12.221583 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.221681 kubelet[3141]: W0117 00:07:12.221595 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.221681 kubelet[3141]: E0117 00:07:12.221605 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.221961 kubelet[3141]: E0117 00:07:12.221842 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.221961 kubelet[3141]: W0117 00:07:12.221854 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.221961 kubelet[3141]: E0117 00:07:12.221864 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.222186 kubelet[3141]: E0117 00:07:12.222147 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.222186 kubelet[3141]: W0117 00:07:12.222160 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.222186 kubelet[3141]: E0117 00:07:12.222171 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.222553 kubelet[3141]: E0117 00:07:12.222460 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.222553 kubelet[3141]: W0117 00:07:12.222471 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.222553 kubelet[3141]: E0117 00:07:12.222480 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.222931 kubelet[3141]: E0117 00:07:12.222789 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.222931 kubelet[3141]: W0117 00:07:12.222802 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.222931 kubelet[3141]: E0117 00:07:12.222813 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.223133 kubelet[3141]: E0117 00:07:12.223110 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.223133 kubelet[3141]: W0117 00:07:12.223129 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.223202 kubelet[3141]: E0117 00:07:12.223144 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.223376 kubelet[3141]: E0117 00:07:12.223361 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.223376 kubelet[3141]: W0117 00:07:12.223375 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.223436 kubelet[3141]: E0117 00:07:12.223399 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.223635 kubelet[3141]: E0117 00:07:12.223616 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.223684 kubelet[3141]: W0117 00:07:12.223649 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.223684 kubelet[3141]: E0117 00:07:12.223660 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.223886 kubelet[3141]: E0117 00:07:12.223869 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.223886 kubelet[3141]: W0117 00:07:12.223885 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.223886 kubelet[3141]: E0117 00:07:12.223896 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.224150 kubelet[3141]: E0117 00:07:12.224137 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.224150 kubelet[3141]: W0117 00:07:12.224149 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.224213 kubelet[3141]: E0117 00:07:12.224159 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.224350 kubelet[3141]: E0117 00:07:12.224333 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.224350 kubelet[3141]: W0117 00:07:12.224348 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.224408 kubelet[3141]: E0117 00:07:12.224357 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.224532 kubelet[3141]: E0117 00:07:12.224518 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.224565 kubelet[3141]: W0117 00:07:12.224530 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.224594 kubelet[3141]: E0117 00:07:12.224570 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.225000 kubelet[3141]: E0117 00:07:12.224978 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.225000 kubelet[3141]: W0117 00:07:12.224997 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.225145 kubelet[3141]: E0117 00:07:12.225009 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.225956 kubelet[3141]: E0117 00:07:12.225339 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.225956 kubelet[3141]: W0117 00:07:12.225349 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.225956 kubelet[3141]: E0117 00:07:12.225359 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.228691 kubelet[3141]: E0117 00:07:12.228530 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.228691 kubelet[3141]: W0117 00:07:12.228557 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.228691 kubelet[3141]: E0117 00:07:12.228572 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.228691 kubelet[3141]: I0117 00:07:12.228591 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b1f66b76-7db3-449d-92fa-faa5ceccc08b-registration-dir\") pod \"csi-node-driver-v4lqg\" (UID: \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\") " pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:12.229274 kubelet[3141]: E0117 00:07:12.229007 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.229274 kubelet[3141]: W0117 00:07:12.229162 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.229629 kubelet[3141]: E0117 00:07:12.229437 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.229629 kubelet[3141]: I0117 00:07:12.229470 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1f66b76-7db3-449d-92fa-faa5ceccc08b-kubelet-dir\") pod \"csi-node-driver-v4lqg\" (UID: \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\") " pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:12.230645 containerd[1672]: time="2026-01-17T00:07:12.229305474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zfhcx,Uid:a43a9104-5ee2-43d5-b98a-7b31de1d11df,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:12.231826 kubelet[3141]: E0117 00:07:12.231214 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.231826 kubelet[3141]: W0117 00:07:12.231240 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.231826 kubelet[3141]: E0117 00:07:12.231259 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.231826 kubelet[3141]: I0117 00:07:12.231285 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b1f66b76-7db3-449d-92fa-faa5ceccc08b-socket-dir\") pod \"csi-node-driver-v4lqg\" (UID: \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\") " pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:12.232753 kubelet[3141]: E0117 00:07:12.232723 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.232753 kubelet[3141]: W0117 00:07:12.232745 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.232846 kubelet[3141]: E0117 00:07:12.232762 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.233288 kubelet[3141]: E0117 00:07:12.232968 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.233288 kubelet[3141]: W0117 00:07:12.232982 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.233288 kubelet[3141]: E0117 00:07:12.232992 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.233288 kubelet[3141]: E0117 00:07:12.233197 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.233288 kubelet[3141]: W0117 00:07:12.233206 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.233288 kubelet[3141]: E0117 00:07:12.233215 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.234095 kubelet[3141]: E0117 00:07:12.233673 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.234095 kubelet[3141]: W0117 00:07:12.233685 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.234095 kubelet[3141]: E0117 00:07:12.233698 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.234095 kubelet[3141]: I0117 00:07:12.233960 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b1f66b76-7db3-449d-92fa-faa5ceccc08b-varrun\") pod \"csi-node-driver-v4lqg\" (UID: \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\") " pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:12.234632 kubelet[3141]: E0117 00:07:12.234181 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.234632 kubelet[3141]: W0117 00:07:12.234193 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.234632 kubelet[3141]: E0117 00:07:12.234204 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.236161 kubelet[3141]: E0117 00:07:12.236136 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.236161 kubelet[3141]: W0117 00:07:12.236157 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.236279 kubelet[3141]: E0117 00:07:12.236172 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.236439 kubelet[3141]: E0117 00:07:12.236423 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.236439 kubelet[3141]: W0117 00:07:12.236435 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.236527 kubelet[3141]: E0117 00:07:12.236446 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.236527 kubelet[3141]: I0117 00:07:12.236468 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7rk\" (UniqueName: \"kubernetes.io/projected/b1f66b76-7db3-449d-92fa-faa5ceccc08b-kube-api-access-7t7rk\") pod \"csi-node-driver-v4lqg\" (UID: \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\") " pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:12.236754 kubelet[3141]: E0117 00:07:12.236728 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.236754 kubelet[3141]: W0117 00:07:12.236748 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.236842 kubelet[3141]: E0117 00:07:12.236760 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.237027 kubelet[3141]: E0117 00:07:12.236986 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.237027 kubelet[3141]: W0117 00:07:12.237001 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.237027 kubelet[3141]: E0117 00:07:12.237012 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.237315 kubelet[3141]: E0117 00:07:12.237296 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.237315 kubelet[3141]: W0117 00:07:12.237311 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.237414 kubelet[3141]: E0117 00:07:12.237322 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.237575 kubelet[3141]: E0117 00:07:12.237555 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.237575 kubelet[3141]: W0117 00:07:12.237570 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.237648 kubelet[3141]: E0117 00:07:12.237581 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.238145 kubelet[3141]: E0117 00:07:12.238057 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.238145 kubelet[3141]: W0117 00:07:12.238072 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.238145 kubelet[3141]: E0117 00:07:12.238085 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.244610 containerd[1672]: time="2026-01-17T00:07:12.244279374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:12.244610 containerd[1672]: time="2026-01-17T00:07:12.244348375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:12.244610 containerd[1672]: time="2026-01-17T00:07:12.244364335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:12.244610 containerd[1672]: time="2026-01-17T00:07:12.244440615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:12.266221 systemd[1]: Started cri-containerd-2649e7f4d4d50bda9cf321751a182327acf9c5df768a8f2ebcf1cd9aef9655b9.scope - libcontainer container 2649e7f4d4d50bda9cf321751a182327acf9c5df768a8f2ebcf1cd9aef9655b9. 
Jan 17 00:07:12.302411 containerd[1672]: time="2026-01-17T00:07:12.302336534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77486fb678-nmbtf,Uid:f987065a-2ca4-4196-8cac-e76069940fd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2649e7f4d4d50bda9cf321751a182327acf9c5df768a8f2ebcf1cd9aef9655b9\"" Jan 17 00:07:12.305091 containerd[1672]: time="2026-01-17T00:07:12.304994977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:07:12.337789 kubelet[3141]: E0117 00:07:12.337750 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.337789 kubelet[3141]: W0117 00:07:12.337777 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.337789 kubelet[3141]: E0117 00:07:12.337798 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.338089 kubelet[3141]: E0117 00:07:12.337985 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.338089 kubelet[3141]: W0117 00:07:12.337994 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.338089 kubelet[3141]: E0117 00:07:12.338005 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.338518 kubelet[3141]: E0117 00:07:12.338400 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.338518 kubelet[3141]: W0117 00:07:12.338420 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.338518 kubelet[3141]: E0117 00:07:12.338436 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.338802 kubelet[3141]: E0117 00:07:12.338730 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.338802 kubelet[3141]: W0117 00:07:12.338742 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.338802 kubelet[3141]: E0117 00:07:12.338753 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.339130 kubelet[3141]: E0117 00:07:12.339061 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.339130 kubelet[3141]: W0117 00:07:12.339074 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.339130 kubelet[3141]: E0117 00:07:12.339084 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.339262 kubelet[3141]: E0117 00:07:12.339243 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.339262 kubelet[3141]: W0117 00:07:12.339257 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.339447 kubelet[3141]: E0117 00:07:12.339268 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.339545 kubelet[3141]: E0117 00:07:12.339531 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.339545 kubelet[3141]: W0117 00:07:12.339543 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.339677 kubelet[3141]: E0117 00:07:12.339551 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.339710 kubelet[3141]: E0117 00:07:12.339700 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.339710 kubelet[3141]: W0117 00:07:12.339708 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.339893 kubelet[3141]: E0117 00:07:12.339716 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.340000 kubelet[3141]: E0117 00:07:12.339987 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.340000 kubelet[3141]: W0117 00:07:12.339998 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.340129 kubelet[3141]: E0117 00:07:12.340007 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.340221 kubelet[3141]: E0117 00:07:12.340207 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.340221 kubelet[3141]: W0117 00:07:12.340219 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.340324 kubelet[3141]: E0117 00:07:12.340229 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.340481 kubelet[3141]: E0117 00:07:12.340468 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.340481 kubelet[3141]: W0117 00:07:12.340479 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.340601 kubelet[3141]: E0117 00:07:12.340487 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.340652 kubelet[3141]: E0117 00:07:12.340634 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.340652 kubelet[3141]: W0117 00:07:12.340646 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.340841 kubelet[3141]: E0117 00:07:12.340661 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.340935 kubelet[3141]: E0117 00:07:12.340920 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.340935 kubelet[3141]: W0117 00:07:12.340932 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.341072 kubelet[3141]: E0117 00:07:12.340942 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.341186 kubelet[3141]: E0117 00:07:12.341170 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.341272 kubelet[3141]: W0117 00:07:12.341181 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.341272 kubelet[3141]: E0117 00:07:12.341207 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.341395 kubelet[3141]: E0117 00:07:12.341381 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.341395 kubelet[3141]: W0117 00:07:12.341392 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.341455 kubelet[3141]: E0117 00:07:12.341402 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.341610 kubelet[3141]: E0117 00:07:12.341596 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.341610 kubelet[3141]: W0117 00:07:12.341607 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.341678 kubelet[3141]: E0117 00:07:12.341617 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.341838 kubelet[3141]: E0117 00:07:12.341823 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.341838 kubelet[3141]: W0117 00:07:12.341835 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.341995 kubelet[3141]: E0117 00:07:12.341845 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.342135 kubelet[3141]: E0117 00:07:12.342120 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.342135 kubelet[3141]: W0117 00:07:12.342131 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.342284 kubelet[3141]: E0117 00:07:12.342140 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.342333 kubelet[3141]: E0117 00:07:12.342316 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.342333 kubelet[3141]: W0117 00:07:12.342326 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.342424 kubelet[3141]: E0117 00:07:12.342335 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.342519 kubelet[3141]: E0117 00:07:12.342506 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.342519 kubelet[3141]: W0117 00:07:12.342517 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.342602 kubelet[3141]: E0117 00:07:12.342526 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.342715 kubelet[3141]: E0117 00:07:12.342700 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.342715 kubelet[3141]: W0117 00:07:12.342712 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.342772 kubelet[3141]: E0117 00:07:12.342724 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.343186 kubelet[3141]: E0117 00:07:12.343167 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.343186 kubelet[3141]: W0117 00:07:12.343185 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.343300 kubelet[3141]: E0117 00:07:12.343196 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.343402 kubelet[3141]: E0117 00:07:12.343388 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.343402 kubelet[3141]: W0117 00:07:12.343400 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.343459 kubelet[3141]: E0117 00:07:12.343409 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.343636 kubelet[3141]: E0117 00:07:12.343622 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.343636 kubelet[3141]: W0117 00:07:12.343633 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.343706 kubelet[3141]: E0117 00:07:12.343642 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:12.344303 kubelet[3141]: E0117 00:07:12.344242 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.344303 kubelet[3141]: W0117 00:07:12.344257 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.344303 kubelet[3141]: E0117 00:07:12.344270 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.353630 kubelet[3141]: E0117 00:07:12.353606 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:12.353784 kubelet[3141]: W0117 00:07:12.353730 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:12.353784 kubelet[3141]: E0117 00:07:12.353754 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:12.537976 containerd[1672]: time="2026-01-17T00:07:12.537868055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:12.538710 containerd[1672]: time="2026-01-17T00:07:12.538584216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:12.538710 containerd[1672]: time="2026-01-17T00:07:12.538642216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:12.539035 containerd[1672]: time="2026-01-17T00:07:12.538954696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:12.556367 systemd[1]: Started cri-containerd-68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd.scope - libcontainer container 68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd. 
Jan 17 00:07:12.577854 containerd[1672]: time="2026-01-17T00:07:12.577739229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zfhcx,Uid:a43a9104-5ee2-43d5-b98a-7b31de1d11df,Namespace:calico-system,Attempt:0,} returns sandbox id \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\"" Jan 17 00:07:13.676853 kubelet[3141]: E0117 00:07:13.676785 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:15.676880 kubelet[3141]: E0117 00:07:15.676793 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:17.452184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432122681.mount: Deactivated successfully. Jan 17 00:07:17.677125 kubelet[3141]: E0117 00:07:17.676809 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:18.435070 containerd[1672]: time="2026-01-17T00:07:18.434991842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:18.482207 containerd[1672]: time="2026-01-17T00:07:18.482142314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 17 00:07:18.529294 containerd[1672]: time="2026-01-17T00:07:18.529204306Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:18.575172 containerd[1672]: time="2026-01-17T00:07:18.575094975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:18.575964 containerd[1672]: time="2026-01-17T00:07:18.575774776Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 6.270741279s" Jan 17 00:07:18.575964 containerd[1672]: time="2026-01-17T00:07:18.575810056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 17 00:07:18.577637 containerd[1672]: time="2026-01-17T00:07:18.577395979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:07:18.626633 containerd[1672]: time="2026-01-17T00:07:18.626433853Z" level=info msg="CreateContainer within sandbox 
\"2649e7f4d4d50bda9cf321751a182327acf9c5df768a8f2ebcf1cd9aef9655b9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:07:18.974739 containerd[1672]: time="2026-01-17T00:07:18.974691743Z" level=info msg="CreateContainer within sandbox \"2649e7f4d4d50bda9cf321751a182327acf9c5df768a8f2ebcf1cd9aef9655b9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a02f793edcb118b52191d9b3d906099b9d01c63e02ff553e2ff84ff37cb6f71\"" Jan 17 00:07:18.976767 containerd[1672]: time="2026-01-17T00:07:18.975237344Z" level=info msg="StartContainer for \"6a02f793edcb118b52191d9b3d906099b9d01c63e02ff553e2ff84ff37cb6f71\"" Jan 17 00:07:19.006270 systemd[1]: Started cri-containerd-6a02f793edcb118b52191d9b3d906099b9d01c63e02ff553e2ff84ff37cb6f71.scope - libcontainer container 6a02f793edcb118b52191d9b3d906099b9d01c63e02ff553e2ff84ff37cb6f71. Jan 17 00:07:19.085101 containerd[1672]: time="2026-01-17T00:07:19.085038591Z" level=info msg="StartContainer for \"6a02f793edcb118b52191d9b3d906099b9d01c63e02ff553e2ff84ff37cb6f71\" returns successfully" Jan 17 00:07:19.677296 kubelet[3141]: E0117 00:07:19.677105 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:19.804679 kubelet[3141]: I0117 00:07:19.804613 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77486fb678-nmbtf" podStartSLOduration=2.531818564 podStartE2EDuration="8.804597646s" podCreationTimestamp="2026-01-17 00:07:11 +0000 UTC" firstStartedPulling="2026-01-17 00:07:12.303901936 +0000 UTC m=+25.711756457" lastFinishedPulling="2026-01-17 00:07:18.576681058 +0000 UTC m=+31.984535539" observedRunningTime="2026-01-17 00:07:19.804329605 +0000 UTC m=+33.212184126" watchObservedRunningTime="2026-01-17 00:07:19.804597646 +0000 UTC m=+33.212452127" Jan 17 00:07:19.875533 kubelet[3141]: E0117 00:07:19.875403 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.875533 kubelet[3141]: W0117 00:07:19.875427 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.875533 kubelet[3141]: E0117 00:07:19.875449 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.875905 kubelet[3141]: E0117 00:07:19.875775 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.875905 kubelet[3141]: W0117 00:07:19.875787 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.875905 kubelet[3141]: E0117 00:07:19.875823 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.876212 kubelet[3141]: E0117 00:07:19.876097 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.876212 kubelet[3141]: W0117 00:07:19.876107 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.876212 kubelet[3141]: E0117 00:07:19.876118 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876296 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877326 kubelet[3141]: W0117 00:07:19.876305 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876315 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876484 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877326 kubelet[3141]: W0117 00:07:19.876492 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876501 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876643 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877326 kubelet[3141]: W0117 00:07:19.876650 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876659 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.877326 kubelet[3141]: E0117 00:07:19.876807 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877532 kubelet[3141]: W0117 00:07:19.876815 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877532 kubelet[3141]: E0117 00:07:19.876822 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.877532 kubelet[3141]: E0117 00:07:19.876955 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877532 kubelet[3141]: W0117 00:07:19.876961 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877532 kubelet[3141]: E0117 00:07:19.876969 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.877532 kubelet[3141]: E0117 00:07:19.877120 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877532 kubelet[3141]: W0117 00:07:19.877127 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877532 kubelet[3141]: E0117 00:07:19.877135 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.877831 kubelet[3141]: E0117 00:07:19.877733 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.877831 kubelet[3141]: W0117 00:07:19.877745 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.877831 kubelet[3141]: E0117 00:07:19.877756 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.878099 kubelet[3141]: E0117 00:07:19.877910 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.878099 kubelet[3141]: W0117 00:07:19.877918 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.878099 kubelet[3141]: E0117 00:07:19.877928 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.878317 kubelet[3141]: E0117 00:07:19.878212 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.878317 kubelet[3141]: W0117 00:07:19.878224 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.878317 kubelet[3141]: E0117 00:07:19.878233 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.878479 kubelet[3141]: E0117 00:07:19.878467 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.878533 kubelet[3141]: W0117 00:07:19.878523 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.878585 kubelet[3141]: E0117 00:07:19.878575 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.878873 kubelet[3141]: E0117 00:07:19.878786 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.878873 kubelet[3141]: W0117 00:07:19.878797 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.878873 kubelet[3141]: E0117 00:07:19.878807 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.879035 kubelet[3141]: E0117 00:07:19.879023 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.879126 kubelet[3141]: W0117 00:07:19.879114 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.879230 kubelet[3141]: E0117 00:07:19.879172 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.891356 kubelet[3141]: E0117 00:07:19.891328 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.891575 kubelet[3141]: W0117 00:07:19.891455 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.891575 kubelet[3141]: E0117 00:07:19.891475 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.891876 kubelet[3141]: E0117 00:07:19.891766 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.891876 kubelet[3141]: W0117 00:07:19.891780 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.891876 kubelet[3141]: E0117 00:07:19.891790 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.892220 kubelet[3141]: E0117 00:07:19.892095 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.892220 kubelet[3141]: W0117 00:07:19.892106 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.892220 kubelet[3141]: E0117 00:07:19.892116 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.892744 kubelet[3141]: E0117 00:07:19.892631 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.892744 kubelet[3141]: W0117 00:07:19.892648 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.892744 kubelet[3141]: E0117 00:07:19.892659 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.893086 kubelet[3141]: E0117 00:07:19.892967 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.893086 kubelet[3141]: W0117 00:07:19.892978 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.893086 kubelet[3141]: E0117 00:07:19.892991 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.893370 kubelet[3141]: E0117 00:07:19.893244 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.893370 kubelet[3141]: W0117 00:07:19.893257 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.893370 kubelet[3141]: E0117 00:07:19.893267 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.893517 kubelet[3141]: E0117 00:07:19.893505 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.893570 kubelet[3141]: W0117 00:07:19.893560 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.893623 kubelet[3141]: E0117 00:07:19.893614 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.893957 kubelet[3141]: E0117 00:07:19.893930 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.893957 kubelet[3141]: W0117 00:07:19.893945 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.893957 kubelet[3141]: E0117 00:07:19.893958 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.894218 kubelet[3141]: E0117 00:07:19.894123 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.894218 kubelet[3141]: W0117 00:07:19.894136 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.894218 kubelet[3141]: E0117 00:07:19.894146 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.894324 kubelet[3141]: E0117 00:07:19.894267 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.894324 kubelet[3141]: W0117 00:07:19.894274 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.894324 kubelet[3141]: E0117 00:07:19.894282 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.894437 kubelet[3141]: E0117 00:07:19.894421 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.894437 kubelet[3141]: W0117 00:07:19.894431 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.894498 kubelet[3141]: E0117 00:07:19.894440 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.894745 kubelet[3141]: E0117 00:07:19.894727 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.894745 kubelet[3141]: W0117 00:07:19.894743 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.894807 kubelet[3141]: E0117 00:07:19.894753 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.894934 kubelet[3141]: E0117 00:07:19.894918 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.894934 kubelet[3141]: W0117 00:07:19.894930 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.894995 kubelet[3141]: E0117 00:07:19.894939 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.895105 kubelet[3141]: E0117 00:07:19.895090 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.895105 kubelet[3141]: W0117 00:07:19.895102 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.895178 kubelet[3141]: E0117 00:07:19.895111 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.895272 kubelet[3141]: E0117 00:07:19.895259 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.895272 kubelet[3141]: W0117 00:07:19.895269 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.895324 kubelet[3141]: E0117 00:07:19.895278 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.895456 kubelet[3141]: E0117 00:07:19.895443 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.895456 kubelet[3141]: W0117 00:07:19.895453 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.895515 kubelet[3141]: E0117 00:07:19.895462 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:19.895793 kubelet[3141]: E0117 00:07:19.895777 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.895793 kubelet[3141]: W0117 00:07:19.895789 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.895860 kubelet[3141]: E0117 00:07:19.895798 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:19.895962 kubelet[3141]: E0117 00:07:19.895949 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:19.895962 kubelet[3141]: W0117 00:07:19.895959 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:19.896019 kubelet[3141]: E0117 00:07:19.895968 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.792211 kubelet[3141]: I0117 00:07:20.792185 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:07:20.885470 kubelet[3141]: E0117 00:07:20.885441 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.885470 kubelet[3141]: W0117 00:07:20.885463 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.885635 kubelet[3141]: E0117 00:07:20.885484 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.885635 kubelet[3141]: E0117 00:07:20.885624 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.885635 kubelet[3141]: W0117 00:07:20.885632 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.885702 kubelet[3141]: E0117 00:07:20.885641 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.885777 kubelet[3141]: E0117 00:07:20.885766 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.885777 kubelet[3141]: W0117 00:07:20.885775 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.885841 kubelet[3141]: E0117 00:07:20.885784 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.885932 kubelet[3141]: E0117 00:07:20.885921 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.885932 kubelet[3141]: W0117 00:07:20.885931 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.885993 kubelet[3141]: E0117 00:07:20.885940 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.886111 kubelet[3141]: E0117 00:07:20.886099 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886111 kubelet[3141]: W0117 00:07:20.886109 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.886184 kubelet[3141]: E0117 00:07:20.886117 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.886253 kubelet[3141]: E0117 00:07:20.886242 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886253 kubelet[3141]: W0117 00:07:20.886252 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.886309 kubelet[3141]: E0117 00:07:20.886265 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.886398 kubelet[3141]: E0117 00:07:20.886388 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886398 kubelet[3141]: W0117 00:07:20.886397 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.886457 kubelet[3141]: E0117 00:07:20.886405 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.886534 kubelet[3141]: E0117 00:07:20.886524 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886534 kubelet[3141]: W0117 00:07:20.886533 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.886592 kubelet[3141]: E0117 00:07:20.886541 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.886681 kubelet[3141]: E0117 00:07:20.886670 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886681 kubelet[3141]: W0117 00:07:20.886680 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.886740 kubelet[3141]: E0117 00:07:20.886688 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.886812 kubelet[3141]: E0117 00:07:20.886802 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886812 kubelet[3141]: W0117 00:07:20.886811 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.886865 kubelet[3141]: E0117 00:07:20.886819 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.886965 kubelet[3141]: E0117 00:07:20.886954 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.886965 kubelet[3141]: W0117 00:07:20.886964 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.887027 kubelet[3141]: E0117 00:07:20.886972 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.887182 kubelet[3141]: E0117 00:07:20.887141 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.887182 kubelet[3141]: W0117 00:07:20.887151 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.887182 kubelet[3141]: E0117 00:07:20.887160 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.887330 kubelet[3141]: E0117 00:07:20.887317 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.887330 kubelet[3141]: W0117 00:07:20.887328 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.887389 kubelet[3141]: E0117 00:07:20.887337 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.887479 kubelet[3141]: E0117 00:07:20.887468 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.887479 kubelet[3141]: W0117 00:07:20.887478 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.887527 kubelet[3141]: E0117 00:07:20.887486 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.887617 kubelet[3141]: E0117 00:07:20.887606 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.887617 kubelet[3141]: W0117 00:07:20.887616 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.887674 kubelet[3141]: E0117 00:07:20.887624 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.898017 kubelet[3141]: E0117 00:07:20.897953 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.898017 kubelet[3141]: W0117 00:07:20.897968 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.898017 kubelet[3141]: E0117 00:07:20.897981 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.898210 kubelet[3141]: E0117 00:07:20.898191 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.898210 kubelet[3141]: W0117 00:07:20.898205 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.898273 kubelet[3141]: E0117 00:07:20.898223 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.898406 kubelet[3141]: E0117 00:07:20.898394 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.898406 kubelet[3141]: W0117 00:07:20.898404 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.898467 kubelet[3141]: E0117 00:07:20.898414 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.898610 kubelet[3141]: E0117 00:07:20.898598 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.898645 kubelet[3141]: W0117 00:07:20.898609 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.898645 kubelet[3141]: E0117 00:07:20.898619 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.898765 kubelet[3141]: E0117 00:07:20.898754 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.898765 kubelet[3141]: W0117 00:07:20.898763 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.898836 kubelet[3141]: E0117 00:07:20.898771 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.898911 kubelet[3141]: E0117 00:07:20.898900 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.898911 kubelet[3141]: W0117 00:07:20.898910 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.898971 kubelet[3141]: E0117 00:07:20.898918 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.899099 kubelet[3141]: E0117 00:07:20.899088 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.899099 kubelet[3141]: W0117 00:07:20.899097 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.899164 kubelet[3141]: E0117 00:07:20.899107 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.899492 kubelet[3141]: E0117 00:07:20.899390 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.899492 kubelet[3141]: W0117 00:07:20.899403 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.899492 kubelet[3141]: E0117 00:07:20.899415 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.899853 kubelet[3141]: E0117 00:07:20.899735 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.899853 kubelet[3141]: W0117 00:07:20.899748 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.899853 kubelet[3141]: E0117 00:07:20.899760 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.900091 kubelet[3141]: E0117 00:07:20.900012 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.900091 kubelet[3141]: W0117 00:07:20.900023 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.900091 kubelet[3141]: E0117 00:07:20.900035 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.900422 kubelet[3141]: E0117 00:07:20.900343 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.900422 kubelet[3141]: W0117 00:07:20.900354 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.900422 kubelet[3141]: E0117 00:07:20.900366 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.900739 kubelet[3141]: E0117 00:07:20.900654 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.900739 kubelet[3141]: W0117 00:07:20.900666 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.900739 kubelet[3141]: E0117 00:07:20.900679 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.901102 kubelet[3141]: E0117 00:07:20.900979 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.901102 kubelet[3141]: W0117 00:07:20.900991 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.901102 kubelet[3141]: E0117 00:07:20.901001 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.901420 kubelet[3141]: E0117 00:07:20.901208 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.901420 kubelet[3141]: W0117 00:07:20.901222 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.901420 kubelet[3141]: E0117 00:07:20.901234 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.901420 kubelet[3141]: E0117 00:07:20.901373 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.901420 kubelet[3141]: W0117 00:07:20.901381 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.901420 kubelet[3141]: E0117 00:07:20.901389 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.901590 kubelet[3141]: E0117 00:07:20.901548 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.901590 kubelet[3141]: W0117 00:07:20.901556 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.901590 kubelet[3141]: E0117 00:07:20.901564 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.901862 kubelet[3141]: E0117 00:07:20.901847 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.901862 kubelet[3141]: W0117 00:07:20.901860 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.901947 kubelet[3141]: E0117 00:07:20.901869 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:07:20.902040 kubelet[3141]: E0117 00:07:20.902028 3141 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:07:20.902040 kubelet[3141]: W0117 00:07:20.902038 3141 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:07:20.902118 kubelet[3141]: E0117 00:07:20.902064 3141 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:07:20.985871 containerd[1672]: time="2026-01-17T00:07:20.985195722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:21.020713 containerd[1672]: time="2026-01-17T00:07:21.020672816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 17 00:07:21.078341 containerd[1672]: time="2026-01-17T00:07:21.078231703Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:21.124559 containerd[1672]: time="2026-01-17T00:07:21.124483054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:21.125560 containerd[1672]: time="2026-01-17T00:07:21.125081375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 2.547651836s" Jan 17 00:07:21.125560 containerd[1672]: time="2026-01-17T00:07:21.125116175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 17 00:07:21.134004 containerd[1672]: time="2026-01-17T00:07:21.133960588Z" level=info msg="CreateContainer within sandbox \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:07:21.472861 containerd[1672]: time="2026-01-17T00:07:21.472517103Z" level=info msg="CreateContainer within sandbox \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91\"" Jan 17 00:07:21.474154 containerd[1672]: time="2026-01-17T00:07:21.474111666Z" level=info msg="StartContainer for \"8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91\"" Jan 17 00:07:21.506199 systemd[1]: Started cri-containerd-8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91.scope - libcontainer container 8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91. Jan 17 00:07:21.536912 containerd[1672]: time="2026-01-17T00:07:21.536865641Z" level=info msg="StartContainer for \"8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91\" returns successfully" Jan 17 00:07:21.547765 systemd[1]: cri-containerd-8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91.scope: Deactivated successfully. Jan 17 00:07:21.569586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91-rootfs.mount: Deactivated successfully. 
Jan 17 00:07:21.677197 kubelet[3141]: E0117 00:07:21.677155 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:23.677393 kubelet[3141]: E0117 00:07:23.677348 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:25.677074 kubelet[3141]: E0117 00:07:25.676997 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:27.598693 kubelet[3141]: I0117 00:07:27.598657 3141 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:07:27.676749 kubelet[3141]: E0117 00:07:27.676675 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:28.959479 containerd[1672]: time="2026-01-17T00:07:28.959404131Z" level=info msg="shim disconnected" id=8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91 namespace=k8s.io Jan 17 00:07:28.959479 containerd[1672]: time="2026-01-17T00:07:28.959457492Z" level=warning msg="cleaning up after shim disconnected" id=8b3c0654a873fb8acb9e3c27bc2c79a63e1615a2ebeb9805e674c1dd9cb63a91 namespace=k8s.io Jan 17 00:07:28.960009 containerd[1672]: time="2026-01-17T00:07:28.959465372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:07:29.677153 kubelet[3141]: E0117 00:07:29.677092 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:29.813217 containerd[1672]: time="2026-01-17T00:07:29.813096403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:07:31.676796 kubelet[3141]: E0117 00:07:31.676737 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:32.082079 containerd[1672]: time="2026-01-17T00:07:32.082004796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:32.084301 containerd[1672]: time="2026-01-17T00:07:32.084253719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 17 
00:07:32.089325 containerd[1672]: time="2026-01-17T00:07:32.089276685Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:32.095599 containerd[1672]: time="2026-01-17T00:07:32.095542853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:32.096575 containerd[1672]: time="2026-01-17T00:07:32.096195894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.283050931s" Jan 17 00:07:32.096575 containerd[1672]: time="2026-01-17T00:07:32.096232014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 17 00:07:32.103075 containerd[1672]: time="2026-01-17T00:07:32.103026023Z" level=info msg="CreateContainer within sandbox \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:07:32.134542 containerd[1672]: time="2026-01-17T00:07:32.134496064Z" level=info msg="CreateContainer within sandbox \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe\"" Jan 17 00:07:32.135865 containerd[1672]: time="2026-01-17T00:07:32.135832506Z" level=info msg="StartContainer for \"cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe\"" Jan 17 00:07:32.177225 systemd[1]: Started cri-containerd-cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe.scope - libcontainer container cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe. Jan 17 00:07:32.206251 containerd[1672]: time="2026-01-17T00:07:32.206201518Z" level=info msg="StartContainer for \"cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe\" returns successfully" Jan 17 00:07:33.344865 systemd[1]: cri-containerd-cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe.scope: Deactivated successfully. Jan 17 00:07:33.366039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe-rootfs.mount: Deactivated successfully. Jan 17 00:07:33.369832 kubelet[3141]: I0117 00:07:33.369698 3141 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:07:34.187025 systemd[1]: Created slice kubepods-besteffort-podf59d9319_e335_4bfc_a026_d8bbe3696e81.slice - libcontainer container kubepods-besteffort-podf59d9319_e335_4bfc_a026_d8bbe3696e81.slice. 
Jan 17 00:07:34.191952 containerd[1672]: time="2026-01-17T00:07:34.190701699Z" level=info msg="shim disconnected" id=cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe namespace=k8s.io Jan 17 00:07:34.191952 containerd[1672]: time="2026-01-17T00:07:34.190773059Z" level=warning msg="cleaning up after shim disconnected" id=cea2fce93f098c34ef705e19938b3ed2b6da44b4e0beb3bae1f437004f84edfe namespace=k8s.io Jan 17 00:07:34.191952 containerd[1672]: time="2026-01-17T00:07:34.190789539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:07:34.198168 systemd[1]: Created slice kubepods-besteffort-podb1f66b76_7db3_449d_92fa_faa5ceccc08b.slice - libcontainer container kubepods-besteffort-podb1f66b76_7db3_449d_92fa_faa5ceccc08b.slice. Jan 17 00:07:34.209853 containerd[1672]: time="2026-01-17T00:07:34.209708724Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:07:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:07:34.213641 containerd[1672]: time="2026-01-17T00:07:34.213597129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4lqg,Uid:b1f66b76-7db3-449d-92fa-faa5ceccc08b,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:34.222614 systemd[1]: Created slice kubepods-burstable-podc75cd337_98e1_4c98_836d_ddd5677f5fcd.slice - libcontainer container kubepods-burstable-podc75cd337_98e1_4c98_836d_ddd5677f5fcd.slice. Jan 17 00:07:34.233270 systemd[1]: Created slice kubepods-besteffort-pod48afbc43_cbe2_4a92_9c9c_ba067e96302f.slice - libcontainer container kubepods-besteffort-pod48afbc43_cbe2_4a92_9c9c_ba067e96302f.slice. Jan 17 00:07:34.261065 systemd[1]: Created slice kubepods-burstable-pod0ad856fe_523a_4a16_bb22_1a01d08264e2.slice - libcontainer container kubepods-burstable-pod0ad856fe_523a_4a16_bb22_1a01d08264e2.slice. Jan 17 00:07:34.274879 systemd[1]: Created slice kubepods-besteffort-pod4d9310f4_1124_495b_a411_5323618ddd1d.slice - libcontainer container kubepods-besteffort-pod4d9310f4_1124_495b_a411_5323618ddd1d.slice. Jan 17 00:07:34.281832 systemd[1]: Created slice kubepods-besteffort-podbcc0dcb5_6cc0_4aca_b131_0866d93b8e20.slice - libcontainer container kubepods-besteffort-podbcc0dcb5_6cc0_4aca_b131_0866d93b8e20.slice. 
Jan 17 00:07:34.284237 kubelet[3141]: I0117 00:07:34.284189 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjrx\" (UniqueName: \"kubernetes.io/projected/bcc0dcb5-6cc0-4aca-b131-0866d93b8e20-kube-api-access-qpjrx\") pod \"calico-apiserver-57d7d85589-ght5v\" (UID: \"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20\") " pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" Jan 17 00:07:34.284772 kubelet[3141]: I0117 00:07:34.284701 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bcc0dcb5-6cc0-4aca-b131-0866d93b8e20-calico-apiserver-certs\") pod \"calico-apiserver-57d7d85589-ght5v\" (UID: \"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20\") " pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" Jan 17 00:07:34.284830 kubelet[3141]: I0117 00:07:34.284784 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92047ce3-1e28-4b15-bb95-00e4947b1fab-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-hg9nz\" (UID: \"92047ce3-1e28-4b15-bb95-00e4947b1fab\") " pod="calico-system/goldmane-7c778bb748-hg9nz" Jan 17 00:07:34.284830 kubelet[3141]: I0117 00:07:34.284803 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad856fe-523a-4a16-bb22-1a01d08264e2-config-volume\") pod \"coredns-66bc5c9577-m66dw\" (UID: \"0ad856fe-523a-4a16-bb22-1a01d08264e2\") " pod="kube-system/coredns-66bc5c9577-m66dw" Jan 17 00:07:34.284935 kubelet[3141]: I0117 00:07:34.284916 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-backend-key-pair\") pod \"whisker-bcb9d7d84-lvzcd\" (UID: \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\") " pod="calico-system/whisker-bcb9d7d84-lvzcd" Jan 17 00:07:34.284979 kubelet[3141]: I0117 00:07:34.284940 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-ca-bundle\") pod \"whisker-bcb9d7d84-lvzcd\" (UID: \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\") " pod="calico-system/whisker-bcb9d7d84-lvzcd" Jan 17 00:07:34.284979 kubelet[3141]: I0117 00:07:34.284955 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxd6c\" (UniqueName: \"kubernetes.io/projected/0ad856fe-523a-4a16-bb22-1a01d08264e2-kube-api-access-bxd6c\") pod \"coredns-66bc5c9577-m66dw\" (UID: \"0ad856fe-523a-4a16-bb22-1a01d08264e2\") " pod="kube-system/coredns-66bc5c9577-m66dw" Jan 17 00:07:34.285076 kubelet[3141]: I0117 00:07:34.285060 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92047ce3-1e28-4b15-bb95-00e4947b1fab-config\") pod \"goldmane-7c778bb748-hg9nz\" (UID: \"92047ce3-1e28-4b15-bb95-00e4947b1fab\") " pod="calico-system/goldmane-7c778bb748-hg9nz" Jan 17 00:07:34.285118 kubelet[3141]: I0117 00:07:34.285086 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xrnk\" (UniqueName: 
\"kubernetes.io/projected/4d9310f4-1124-495b-a411-5323618ddd1d-kube-api-access-6xrnk\") pod \"calico-apiserver-57d7d85589-mrl7f\" (UID: \"4d9310f4-1124-495b-a411-5323618ddd1d\") " pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" Jan 17 00:07:34.285118 kubelet[3141]: I0117 00:07:34.285113 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f59d9319-e335-4bfc-a026-d8bbe3696e81-tigera-ca-bundle\") pod \"calico-kube-controllers-894f9f8d4-b5lgh\" (UID: \"f59d9319-e335-4bfc-a026-d8bbe3696e81\") " pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" Jan 17 00:07:34.285655 kubelet[3141]: I0117 00:07:34.285565 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh649\" (UniqueName: \"kubernetes.io/projected/92047ce3-1e28-4b15-bb95-00e4947b1fab-kube-api-access-xh649\") pod \"goldmane-7c778bb748-hg9nz\" (UID: \"92047ce3-1e28-4b15-bb95-00e4947b1fab\") " pod="calico-system/goldmane-7c778bb748-hg9nz" Jan 17 00:07:34.287564 kubelet[3141]: I0117 00:07:34.286742 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c75cd337-98e1-4c98-836d-ddd5677f5fcd-config-volume\") pod \"coredns-66bc5c9577-xzzqx\" (UID: \"c75cd337-98e1-4c98-836d-ddd5677f5fcd\") " pod="kube-system/coredns-66bc5c9577-xzzqx" Jan 17 00:07:34.287564 kubelet[3141]: I0117 00:07:34.286770 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4s2h\" (UniqueName: \"kubernetes.io/projected/48afbc43-cbe2-4a92-9c9c-ba067e96302f-kube-api-access-m4s2h\") pod \"whisker-bcb9d7d84-lvzcd\" (UID: \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\") " pod="calico-system/whisker-bcb9d7d84-lvzcd" Jan 17 00:07:34.287564 kubelet[3141]: I0117 00:07:34.286797 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/92047ce3-1e28-4b15-bb95-00e4947b1fab-goldmane-key-pair\") pod \"goldmane-7c778bb748-hg9nz\" (UID: \"92047ce3-1e28-4b15-bb95-00e4947b1fab\") " pod="calico-system/goldmane-7c778bb748-hg9nz" Jan 17 00:07:34.287564 kubelet[3141]: I0117 00:07:34.286814 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4d9310f4-1124-495b-a411-5323618ddd1d-calico-apiserver-certs\") pod \"calico-apiserver-57d7d85589-mrl7f\" (UID: \"4d9310f4-1124-495b-a411-5323618ddd1d\") " pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" Jan 17 00:07:34.287564 kubelet[3141]: I0117 00:07:34.286831 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mf6m\" (UniqueName: \"kubernetes.io/projected/c75cd337-98e1-4c98-836d-ddd5677f5fcd-kube-api-access-5mf6m\") pod \"coredns-66bc5c9577-xzzqx\" (UID: \"c75cd337-98e1-4c98-836d-ddd5677f5fcd\") " pod="kube-system/coredns-66bc5c9577-xzzqx" Jan 17 00:07:34.287724 kubelet[3141]: I0117 00:07:34.286845 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59prz\" (UniqueName: \"kubernetes.io/projected/f59d9319-e335-4bfc-a026-d8bbe3696e81-kube-api-access-59prz\") pod \"calico-kube-controllers-894f9f8d4-b5lgh\" (UID: \"f59d9319-e335-4bfc-a026-d8bbe3696e81\") " 
pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" Jan 17 00:07:34.293330 systemd[1]: Created slice kubepods-besteffort-pod92047ce3_1e28_4b15_bb95_00e4947b1fab.slice - libcontainer container kubepods-besteffort-pod92047ce3_1e28_4b15_bb95_00e4947b1fab.slice. Jan 17 00:07:34.326470 containerd[1672]: time="2026-01-17T00:07:34.326406156Z" level=error msg="Failed to destroy network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.326814 containerd[1672]: time="2026-01-17T00:07:34.326782796Z" level=error msg="encountered an error cleaning up failed sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.326859 containerd[1672]: time="2026-01-17T00:07:34.326843996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4lqg,Uid:b1f66b76-7db3-449d-92fa-faa5ceccc08b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.327127 kubelet[3141]: E0117 00:07:34.327087 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.327203 kubelet[3141]: E0117 00:07:34.327155 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:34.327203 kubelet[3141]: E0117 00:07:34.327177 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v4lqg" Jan 17 00:07:34.327261 kubelet[3141]: E0117 00:07:34.327229 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:34.328770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d-shm.mount: Deactivated successfully. Jan 17 00:07:34.497645 containerd[1672]: time="2026-01-17T00:07:34.497272018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-894f9f8d4-b5lgh,Uid:f59d9319-e335-4bfc-a026-d8bbe3696e81,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:34.535874 containerd[1672]: time="2026-01-17T00:07:34.535835708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzqx,Uid:c75cd337-98e1-4c98-836d-ddd5677f5fcd,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:34.545171 containerd[1672]: time="2026-01-17T00:07:34.545132120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bcb9d7d84-lvzcd,Uid:48afbc43-cbe2-4a92-9c9c-ba067e96302f,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:34.565141 containerd[1672]: time="2026-01-17T00:07:34.564930786Z" level=error msg="Failed to destroy network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.565317 containerd[1672]: time="2026-01-17T00:07:34.565256947Z" level=error msg="encountered an error cleaning up failed sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.565357 containerd[1672]: time="2026-01-17T00:07:34.565317867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-894f9f8d4-b5lgh,Uid:f59d9319-e335-4bfc-a026-d8bbe3696e81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.566137 kubelet[3141]: E0117 00:07:34.565494 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.566137 kubelet[3141]: E0117 00:07:34.565560 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" Jan 17 00:07:34.566137 kubelet[3141]: E0117 00:07:34.565580 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" Jan 17 00:07:34.566555 kubelet[3141]: E0117 00:07:34.565625 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-894f9f8d4-b5lgh_calico-system(f59d9319-e335-4bfc-a026-d8bbe3696e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-894f9f8d4-b5lgh_calico-system(f59d9319-e335-4bfc-a026-d8bbe3696e81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:07:34.571230 containerd[1672]: time="2026-01-17T00:07:34.571197514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m66dw,Uid:0ad856fe-523a-4a16-bb22-1a01d08264e2,Namespace:kube-system,Attempt:0,}" Jan 17 00:07:34.589917 containerd[1672]: time="2026-01-17T00:07:34.589822818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-mrl7f,Uid:4d9310f4-1124-495b-a411-5323618ddd1d,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:07:34.597905 containerd[1672]: time="2026-01-17T00:07:34.597720469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-ght5v,Uid:bcc0dcb5-6cc0-4aca-b131-0866d93b8e20,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:07:34.605918 containerd[1672]: time="2026-01-17T00:07:34.605781519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hg9nz,Uid:92047ce3-1e28-4b15-bb95-00e4947b1fab,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:34.627342 containerd[1672]: time="2026-01-17T00:07:34.627295787Z" level=error msg="Failed to destroy network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.627878 containerd[1672]: time="2026-01-17T00:07:34.627733228Z" level=error msg="encountered an error cleaning up failed sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.627878 containerd[1672]: time="2026-01-17T00:07:34.627784428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzqx,Uid:c75cd337-98e1-4c98-836d-ddd5677f5fcd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.628088 kubelet[3141]: E0117 00:07:34.628020 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.628153 kubelet[3141]: E0117 00:07:34.628108 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzzqx" Jan 17 00:07:34.628153 kubelet[3141]: E0117 00:07:34.628126 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-xzzqx" Jan 17 00:07:34.628242 kubelet[3141]: E0117 00:07:34.628176 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-xzzqx_kube-system(c75cd337-98e1-4c98-836d-ddd5677f5fcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-xzzqx_kube-system(c75cd337-98e1-4c98-836d-ddd5677f5fcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xzzqx" podUID="c75cd337-98e1-4c98-836d-ddd5677f5fcd" Jan 17 00:07:34.684565 containerd[1672]: time="2026-01-17T00:07:34.684458382Z" level=error msg="Failed to destroy network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.685180 containerd[1672]: time="2026-01-17T00:07:34.684899662Z" level=error msg="encountered an error cleaning up failed sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.685180 containerd[1672]: time="2026-01-17T00:07:34.684963742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bcb9d7d84-lvzcd,Uid:48afbc43-cbe2-4a92-9c9c-ba067e96302f,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.685330 kubelet[3141]: E0117 00:07:34.685195 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.685330 kubelet[3141]: E0117 00:07:34.685277 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bcb9d7d84-lvzcd" Jan 17 00:07:34.685330 kubelet[3141]: E0117 00:07:34.685298 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bcb9d7d84-lvzcd" Jan 17 00:07:34.685446 kubelet[3141]: E0117 00:07:34.685361 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bcb9d7d84-lvzcd_calico-system(48afbc43-cbe2-4a92-9c9c-ba067e96302f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bcb9d7d84-lvzcd_calico-system(48afbc43-cbe2-4a92-9c9c-ba067e96302f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bcb9d7d84-lvzcd" podUID="48afbc43-cbe2-4a92-9c9c-ba067e96302f" Jan 17 00:07:34.779489 containerd[1672]: time="2026-01-17T00:07:34.779295585Z" level=error msg="Failed to destroy network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.782025 containerd[1672]: time="2026-01-17T00:07:34.781855668Z" level=error msg="encountered an error cleaning up failed sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.782025 containerd[1672]: time="2026-01-17T00:07:34.781922548Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-m66dw,Uid:0ad856fe-523a-4a16-bb22-1a01d08264e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.782255 kubelet[3141]: E0117 00:07:34.782177 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.782255 kubelet[3141]: E0117 00:07:34.782231 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m66dw" Jan 17 00:07:34.782255 kubelet[3141]: E0117 00:07:34.782250 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-m66dw" Jan 17 00:07:34.782365 kubelet[3141]: E0117 00:07:34.782302 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-m66dw_kube-system(0ad856fe-523a-4a16-bb22-1a01d08264e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-m66dw_kube-system(0ad856fe-523a-4a16-bb22-1a01d08264e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m66dw" podUID="0ad856fe-523a-4a16-bb22-1a01d08264e2" Jan 17 00:07:34.808618 containerd[1672]: time="2026-01-17T00:07:34.808568223Z" level=error msg="Failed to destroy network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.810330 containerd[1672]: time="2026-01-17T00:07:34.809110664Z" level=error msg="Failed to destroy network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.810514 containerd[1672]: time="2026-01-17T00:07:34.810451665Z" level=error msg="encountered an error 
cleaning up failed sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.810555 containerd[1672]: time="2026-01-17T00:07:34.810510506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hg9nz,Uid:92047ce3-1e28-4b15-bb95-00e4947b1fab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.811250 containerd[1672]: time="2026-01-17T00:07:34.810857506Z" level=error msg="encountered an error cleaning up failed sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.811250 containerd[1672]: time="2026-01-17T00:07:34.810920466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-mrl7f,Uid:4d9310f4-1124-495b-a411-5323618ddd1d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.811495 kubelet[3141]: E0117 00:07:34.810726 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.811495 kubelet[3141]: E0117 00:07:34.810780 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-hg9nz" Jan 17 00:07:34.811495 kubelet[3141]: E0117 00:07:34.810798 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-hg9nz" Jan 17 00:07:34.811728 kubelet[3141]: E0117 00:07:34.810850 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-hg9nz_calico-system(92047ce3-1e28-4b15-bb95-00e4947b1fab)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-hg9nz_calico-system(92047ce3-1e28-4b15-bb95-00e4947b1fab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:07:34.812466 kubelet[3141]: E0117 00:07:34.812424 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.812564 kubelet[3141]: E0117 00:07:34.812475 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" Jan 17 00:07:34.812564 kubelet[3141]: E0117 00:07:34.812502 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" Jan 17 00:07:34.812564 kubelet[3141]: E0117 00:07:34.812548 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d7d85589-mrl7f_calico-apiserver(4d9310f4-1124-495b-a411-5323618ddd1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d7d85589-mrl7f_calico-apiserver(4d9310f4-1124-495b-a411-5323618ddd1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:07:34.814074 containerd[1672]: time="2026-01-17T00:07:34.814017430Z" level=error msg="Failed to destroy network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.814540 containerd[1672]: time="2026-01-17T00:07:34.814422911Z" level=error msg="encountered an error cleaning up failed sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.814540 containerd[1672]: time="2026-01-17T00:07:34.814489351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-ght5v,Uid:bcc0dcb5-6cc0-4aca-b131-0866d93b8e20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.815014 kubelet[3141]: E0117 00:07:34.814731 3141 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.815014 kubelet[3141]: E0117 00:07:34.814770 3141 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" Jan 17 00:07:34.815014 kubelet[3141]: E0117 00:07:34.814788 3141 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" Jan 17 00:07:34.815142 kubelet[3141]: E0117 00:07:34.814838 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d7d85589-ght5v_calico-apiserver(bcc0dcb5-6cc0-4aca-b131-0866d93b8e20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d7d85589-ght5v_calico-apiserver(bcc0dcb5-6cc0-4aca-b131-0866d93b8e20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:07:34.828899 kubelet[3141]: I0117 00:07:34.828874 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:07:34.830449 containerd[1672]: time="2026-01-17T00:07:34.830284811Z" level=info msg="StopPodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\"" Jan 17 00:07:34.830695 containerd[1672]: time="2026-01-17T00:07:34.830668612Z" level=info msg="Ensure that sandbox 
01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9 in task-service has been cleanup successfully" Jan 17 00:07:34.830934 kubelet[3141]: I0117 00:07:34.830901 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:07:34.832172 containerd[1672]: time="2026-01-17T00:07:34.832074614Z" level=info msg="StopPodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\"" Jan 17 00:07:34.832986 containerd[1672]: time="2026-01-17T00:07:34.832403334Z" level=info msg="Ensure that sandbox c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d in task-service has been cleanup successfully" Jan 17 00:07:34.837304 kubelet[3141]: I0117 00:07:34.837253 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:07:34.839293 containerd[1672]: time="2026-01-17T00:07:34.839250183Z" level=info msg="StopPodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\"" Jan 17 00:07:34.839442 containerd[1672]: time="2026-01-17T00:07:34.839421023Z" level=info msg="Ensure that sandbox 192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386 in task-service has been cleanup successfully" Jan 17 00:07:34.848777 kubelet[3141]: I0117 00:07:34.848742 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:07:34.852239 containerd[1672]: time="2026-01-17T00:07:34.851845079Z" level=info msg="StopPodSandbox for \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\"" Jan 17 00:07:34.854131 containerd[1672]: time="2026-01-17T00:07:34.852839561Z" level=info msg="Ensure that sandbox 11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61 in task-service has been cleanup successfully" Jan 17 00:07:34.856567 kubelet[3141]: I0117 00:07:34.856172 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:34.860757 containerd[1672]: time="2026-01-17T00:07:34.860722651Z" level=info msg="StopPodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\"" Jan 17 00:07:34.860942 containerd[1672]: time="2026-01-17T00:07:34.860915051Z" level=info msg="Ensure that sandbox 07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7 in task-service has been cleanup successfully" Jan 17 00:07:34.865436 kubelet[3141]: I0117 00:07:34.865248 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:07:34.866771 containerd[1672]: time="2026-01-17T00:07:34.866729459Z" level=info msg="StopPodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\"" Jan 17 00:07:34.869725 kubelet[3141]: I0117 00:07:34.869693 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:07:34.870390 containerd[1672]: time="2026-01-17T00:07:34.869173662Z" level=info msg="Ensure that sandbox 94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e in task-service has been cleanup successfully" Jan 17 00:07:34.871875 containerd[1672]: time="2026-01-17T00:07:34.871828345Z" level=info 
msg="StopPodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\"" Jan 17 00:07:34.872010 containerd[1672]: time="2026-01-17T00:07:34.871991385Z" level=info msg="Ensure that sandbox 9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca in task-service has been cleanup successfully" Jan 17 00:07:34.893145 containerd[1672]: time="2026-01-17T00:07:34.892228892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:07:34.898637 kubelet[3141]: I0117 00:07:34.898135 3141 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:07:34.900213 containerd[1672]: time="2026-01-17T00:07:34.900111822Z" level=info msg="StopPodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\"" Jan 17 00:07:34.900683 containerd[1672]: time="2026-01-17T00:07:34.900282302Z" level=info msg="Ensure that sandbox 104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7 in task-service has been cleanup successfully" Jan 17 00:07:34.930778 containerd[1672]: time="2026-01-17T00:07:34.930573662Z" level=error msg="StopPodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" failed" error="failed to destroy network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.931847 kubelet[3141]: E0117 00:07:34.931805 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:07:34.933647 kubelet[3141]: E0117 00:07:34.931944 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9"} Jan 17 00:07:34.933647 kubelet[3141]: E0117 00:07:34.932006 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:34.933647 kubelet[3141]: E0117 00:07:34.932030 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" 
Jan 17 00:07:34.970566 containerd[1672]: time="2026-01-17T00:07:34.970418354Z" level=error msg="StopPodSandbox for \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" failed" error="failed to destroy network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.970735 kubelet[3141]: E0117 00:07:34.970668 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:07:34.970735 kubelet[3141]: E0117 00:07:34.970713 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61"} Jan 17 00:07:34.970817 kubelet[3141]: E0117 00:07:34.970752 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92047ce3-1e28-4b15-bb95-00e4947b1fab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:34.970817 kubelet[3141]: E0117 00:07:34.970777 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92047ce3-1e28-4b15-bb95-00e4947b1fab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:07:34.972236 containerd[1672]: time="2026-01-17T00:07:34.972148236Z" level=error msg="StopPodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" failed" error="failed to destroy network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.972352 kubelet[3141]: E0117 00:07:34.972314 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:07:34.972390 kubelet[3141]: E0117 00:07:34.972350 3141 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e"} Jan 17 00:07:34.972390 kubelet[3141]: E0117 00:07:34.972373 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c75cd337-98e1-4c98-836d-ddd5677f5fcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:34.972591 kubelet[3141]: E0117 00:07:34.972393 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c75cd337-98e1-4c98-836d-ddd5677f5fcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-xzzqx" podUID="c75cd337-98e1-4c98-836d-ddd5677f5fcd" Jan 17 00:07:34.998137 containerd[1672]: time="2026-01-17T00:07:34.998082430Z" level=error msg="StopPodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" failed" error="failed to destroy network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.998686 containerd[1672]: time="2026-01-17T00:07:34.998593430Z" level=error msg="StopPodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" failed" error="failed to destroy network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:34.998838 kubelet[3141]: E0117 00:07:34.998766 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:34.998838 kubelet[3141]: E0117 00:07:34.998820 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7"} Jan 17 00:07:34.999206 kubelet[3141]: E0117 00:07:34.998853 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:34.999206 kubelet[3141]: E0117 00:07:34.998877 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bcb9d7d84-lvzcd" podUID="48afbc43-cbe2-4a92-9c9c-ba067e96302f" Jan 17 00:07:34.999206 kubelet[3141]: E0117 00:07:34.999005 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:07:34.999206 kubelet[3141]: E0117 00:07:34.999041 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d"} Jan 17 00:07:34.999377 kubelet[3141]: E0117 00:07:34.999076 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:34.999377 kubelet[3141]: E0117 00:07:34.999099 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b1f66b76-7db3-449d-92fa-faa5ceccc08b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:35.002518 containerd[1672]: time="2026-01-17T00:07:35.002464075Z" level=error msg="StopPodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" failed" error="failed to destroy network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:35.002744 kubelet[3141]: E0117 00:07:35.002711 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:07:35.002944 kubelet[3141]: E0117 00:07:35.002845 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386"} Jan 17 00:07:35.002944 kubelet[3141]: E0117 00:07:35.002881 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d9310f4-1124-495b-a411-5323618ddd1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:35.002944 kubelet[3141]: E0117 00:07:35.002919 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d9310f4-1124-495b-a411-5323618ddd1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:07:35.004802 containerd[1672]: time="2026-01-17T00:07:35.004764958Z" level=error msg="StopPodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" failed" error="failed to destroy network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:35.005093 kubelet[3141]: E0117 00:07:35.004959 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:07:35.005093 kubelet[3141]: E0117 00:07:35.004997 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7"} Jan 17 00:07:35.005093 kubelet[3141]: E0117 00:07:35.005022 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ad856fe-523a-4a16-bb22-1a01d08264e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:35.005093 kubelet[3141]: E0117 00:07:35.005064 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"0ad856fe-523a-4a16-bb22-1a01d08264e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-m66dw" podUID="0ad856fe-523a-4a16-bb22-1a01d08264e2" Jan 17 00:07:35.012158 containerd[1672]: time="2026-01-17T00:07:35.012108728Z" level=error msg="StopPodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" failed" error="failed to destroy network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:07:35.012397 kubelet[3141]: E0117 00:07:35.012359 3141 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:07:35.012454 kubelet[3141]: E0117 00:07:35.012406 3141 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca"} Jan 17 00:07:35.012454 kubelet[3141]: E0117 00:07:35.012437 3141 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f59d9319-e335-4bfc-a026-d8bbe3696e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:07:35.012581 kubelet[3141]: E0117 00:07:35.012460 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f59d9319-e335-4bfc-a026-d8bbe3696e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:07:39.306244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057407865.mount: Deactivated successfully. 
Jan 17 00:07:39.986661 containerd[1672]: time="2026-01-17T00:07:39.985903117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:39.988332 containerd[1672]: time="2026-01-17T00:07:39.988300280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:07:40.029886 containerd[1672]: time="2026-01-17T00:07:40.029631334Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:40.170584 containerd[1672]: time="2026-01-17T00:07:40.170518037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:07:40.171378 containerd[1672]: time="2026-01-17T00:07:40.171338599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 5.279065547s" Jan 17 00:07:40.171378 containerd[1672]: time="2026-01-17T00:07:40.171378999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:07:40.473983 containerd[1672]: time="2026-01-17T00:07:40.473898112Z" level=info msg="CreateContainer within sandbox \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:07:40.851198 containerd[1672]: time="2026-01-17T00:07:40.851023442Z" level=info msg="CreateContainer within sandbox \"68ac0f1098267186551bcaa98c148b8d27dcca33f43c7e32d072fdb9e3761bbd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41cb70fd9073b2c4022a3447a422909b6dc393595146637206226853efd02d9a\"" Jan 17 00:07:40.852908 containerd[1672]: time="2026-01-17T00:07:40.852826765Z" level=info msg="StartContainer for \"41cb70fd9073b2c4022a3447a422909b6dc393595146637206226853efd02d9a\"" Jan 17 00:07:40.894220 systemd[1]: Started cri-containerd-41cb70fd9073b2c4022a3447a422909b6dc393595146637206226853efd02d9a.scope - libcontainer container 41cb70fd9073b2c4022a3447a422909b6dc393595146637206226853efd02d9a. 
Jan 17 00:07:40.931230 containerd[1672]: time="2026-01-17T00:07:40.931180627Z" level=info msg="StartContainer for \"41cb70fd9073b2c4022a3447a422909b6dc393595146637206226853efd02d9a\" returns successfully" Jan 17 00:07:40.955429 waagent[1847]: 2026-01-17T00:07:40.955356Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:07:40.967852 waagent[1847]: 2026-01-17T00:07:40.966945Z INFO ExtHandler Jan 17 00:07:40.967852 waagent[1847]: 2026-01-17T00:07:40.967084Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:07:40.972086 waagent[1847]: 2026-01-17T00:07:40.971461Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:07:41.110547 waagent[1847]: 2026-01-17T00:07:41.110389Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B821667CB418628ADC68614E85647EE9CA4B457A', 'hasPrivateKey': True} Jan 17 00:07:41.111537 waagent[1847]: 2026-01-17T00:07:41.111463Z INFO ExtHandler Fetch goal state completed Jan 17 00:07:41.111929 waagent[1847]: 2026-01-17T00:07:41.111887Z INFO ExtHandler ExtHandler Jan 17 00:07:41.112001 waagent[1847]: 2026-01-17T00:07:41.111969Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 372469f7-0b54-44ac-b6bf-778ca951b615 correlation 15bb6efc-89bc-45e3-b101-844a2aa6df9d created: 2026-01-17T00:07:35.988068Z] Jan 17 00:07:41.112344 waagent[1847]: 2026-01-17T00:07:41.112304Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:07:41.112909 waagent[1847]: 2026-01-17T00:07:41.112867Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 17 00:07:41.326205 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:07:41.326335 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:07:41.518584 containerd[1672]: time="2026-01-17T00:07:41.516713387Z" level=info msg="StopPodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\"" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.671 [INFO][4373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.672 [INFO][4373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" iface="eth0" netns="/var/run/netns/cni-eb896e54-93b7-9153-7a58-11365e30ef11" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.672 [INFO][4373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" iface="eth0" netns="/var/run/netns/cni-eb896e54-93b7-9153-7a58-11365e30ef11" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.675 [INFO][4373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" iface="eth0" netns="/var/run/netns/cni-eb896e54-93b7-9153-7a58-11365e30ef11" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.675 [INFO][4373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.675 [INFO][4373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.698 [INFO][4380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.698 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.699 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.713 [WARNING][4380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.714 [INFO][4380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.715 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:41.720674 containerd[1672]: 2026-01-17 00:07:41.718 [INFO][4373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:41.724276 containerd[1672]: time="2026-01-17T00:07:41.724122576Z" level=info msg="TearDown network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" successfully" Jan 17 00:07:41.724276 containerd[1672]: time="2026-01-17T00:07:41.724159376Z" level=info msg="StopPodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" returns successfully" Jan 17 00:07:41.725496 systemd[1]: run-netns-cni\x2deb896e54\x2d93b7\x2d9153\x2d7a58\x2d11365e30ef11.mount: Deactivated successfully. Jan 17 00:07:41.843147 kubelet[3141]: I0117 00:07:41.842804 3141 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "48afbc43-cbe2-4a92-9c9c-ba067e96302f" (UID: "48afbc43-cbe2-4a92-9c9c-ba067e96302f"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:07:41.844310 kubelet[3141]: I0117 00:07:41.844173 3141 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-ca-bundle\") pod \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\" (UID: \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\") " Jan 17 00:07:41.844732 kubelet[3141]: I0117 00:07:41.844416 3141 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-backend-key-pair\") pod \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\" (UID: \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\") " Jan 17 00:07:41.844816 kubelet[3141]: I0117 00:07:41.844801 3141 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4s2h\" (UniqueName: \"kubernetes.io/projected/48afbc43-cbe2-4a92-9c9c-ba067e96302f-kube-api-access-m4s2h\") pod \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\" (UID: \"48afbc43-cbe2-4a92-9c9c-ba067e96302f\") " Jan 17 00:07:41.844969 kubelet[3141]: I0117 00:07:41.844955 3141 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-ca-bundle\") on node \"ci-4081.3.6-n-4c16a83c6c\" DevicePath \"\"" Jan 17 00:07:41.849608 kubelet[3141]: I0117 00:07:41.849572 3141 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48afbc43-cbe2-4a92-9c9c-ba067e96302f-kube-api-access-m4s2h" (OuterVolumeSpecName: "kube-api-access-m4s2h") pod "48afbc43-cbe2-4a92-9c9c-ba067e96302f" (UID: "48afbc43-cbe2-4a92-9c9c-ba067e96302f"). InnerVolumeSpecName "kube-api-access-m4s2h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:07:41.850299 kubelet[3141]: I0117 00:07:41.850232 3141 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "48afbc43-cbe2-4a92-9c9c-ba067e96302f" (UID: "48afbc43-cbe2-4a92-9c9c-ba067e96302f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:07:41.851942 systemd[1]: var-lib-kubelet-pods-48afbc43\x2dcbe2\x2d4a92\x2d9c9c\x2dba067e96302f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm4s2h.mount: Deactivated successfully. Jan 17 00:07:41.853757 systemd[1]: var-lib-kubelet-pods-48afbc43\x2dcbe2\x2d4a92\x2d9c9c\x2dba067e96302f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:07:41.934784 systemd[1]: Removed slice kubepods-besteffort-pod48afbc43_cbe2_4a92_9c9c_ba067e96302f.slice - libcontainer container kubepods-besteffort-pod48afbc43_cbe2_4a92_9c9c_ba067e96302f.slice. 
Jan 17 00:07:41.946271 kubelet[3141]: I0117 00:07:41.946199 3141 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/48afbc43-cbe2-4a92-9c9c-ba067e96302f-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-4c16a83c6c\" DevicePath \"\"" Jan 17 00:07:41.946271 kubelet[3141]: I0117 00:07:41.946236 3141 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m4s2h\" (UniqueName: \"kubernetes.io/projected/48afbc43-cbe2-4a92-9c9c-ba067e96302f-kube-api-access-m4s2h\") on node \"ci-4081.3.6-n-4c16a83c6c\" DevicePath \"\"" Jan 17 00:07:41.972386 kubelet[3141]: I0117 00:07:41.971179 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zfhcx" podStartSLOduration=3.377120807 podStartE2EDuration="30.971154497s" podCreationTimestamp="2026-01-17 00:07:11 +0000 UTC" firstStartedPulling="2026-01-17 00:07:12.578896791 +0000 UTC m=+25.986751312" lastFinishedPulling="2026-01-17 00:07:40.172930481 +0000 UTC m=+53.580785002" observedRunningTime="2026-01-17 00:07:41.954635956 +0000 UTC m=+55.362490477" watchObservedRunningTime="2026-01-17 00:07:41.971154497 +0000 UTC m=+55.379009018" Jan 17 00:07:42.040223 systemd[1]: Created slice kubepods-besteffort-podf0b7656b_346b_4c7a_84f5_6afacf5c8b98.slice - libcontainer container kubepods-besteffort-podf0b7656b_346b_4c7a_84f5_6afacf5c8b98.slice. Jan 17 00:07:42.131463 systemd[1]: run-containerd-runc-k8s.io-41cb70fd9073b2c4022a3447a422909b6dc393595146637206226853efd02d9a-runc.czVMQz.mount: Deactivated successfully. Jan 17 00:07:42.148657 kubelet[3141]: I0117 00:07:42.148129 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f0b7656b-346b-4c7a-84f5-6afacf5c8b98-whisker-backend-key-pair\") pod \"whisker-fdd85dd66-d8cmt\" (UID: \"f0b7656b-346b-4c7a-84f5-6afacf5c8b98\") " pod="calico-system/whisker-fdd85dd66-d8cmt" Jan 17 00:07:42.148657 kubelet[3141]: I0117 00:07:42.148172 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b7656b-346b-4c7a-84f5-6afacf5c8b98-whisker-ca-bundle\") pod \"whisker-fdd85dd66-d8cmt\" (UID: \"f0b7656b-346b-4c7a-84f5-6afacf5c8b98\") " pod="calico-system/whisker-fdd85dd66-d8cmt" Jan 17 00:07:42.148657 kubelet[3141]: I0117 00:07:42.148191 3141 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgm4n\" (UniqueName: \"kubernetes.io/projected/f0b7656b-346b-4c7a-84f5-6afacf5c8b98-kube-api-access-pgm4n\") pod \"whisker-fdd85dd66-d8cmt\" (UID: \"f0b7656b-346b-4c7a-84f5-6afacf5c8b98\") " pod="calico-system/whisker-fdd85dd66-d8cmt" Jan 17 00:07:42.350754 containerd[1672]: time="2026-01-17T00:07:42.350373389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fdd85dd66-d8cmt,Uid:f0b7656b-346b-4c7a-84f5-6afacf5c8b98,Namespace:calico-system,Attempt:0,}" Jan 17 00:07:42.679272 kubelet[3141]: I0117 00:07:42.679231 3141 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48afbc43-cbe2-4a92-9c9c-ba067e96302f" path="/var/lib/kubelet/pods/48afbc43-cbe2-4a92-9c9c-ba067e96302f/volumes" Jan 17 00:07:42.834187 systemd-networkd[1306]: calie236cb2c0d2: Link UP Jan 17 00:07:42.835161 systemd-networkd[1306]: calie236cb2c0d2: Gained carrier Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.698 [INFO][4446] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.717 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0 whisker-fdd85dd66- calico-system f0b7656b-346b-4c7a-84f5-6afacf5c8b98 919 0 2026-01-17 00:07:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fdd85dd66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c whisker-fdd85dd66-d8cmt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie236cb2c0d2 [] [] }} ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.717 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.744 [INFO][4458] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" HandleID="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.744 [INFO][4458] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" HandleID="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"whisker-fdd85dd66-d8cmt", "timestamp":"2026-01-17 00:07:42.744170221 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.744 [INFO][4458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.744 [INFO][4458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.744 [INFO][4458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.757 [INFO][4458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.761 [INFO][4458] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.764 [INFO][4458] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.766 [INFO][4458] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.768 [INFO][4458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.768 [INFO][4458] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.770 [INFO][4458] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16 Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.775 [INFO][4458] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.785 [INFO][4458] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.65/26] block=192.168.12.64/26 handle="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.786 [INFO][4458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.65/26] handle="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.786 [INFO][4458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
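
[Annotation] The [4458] IPAM trace above is the usual Calico flow: take the host-wide lock, look up this node's block affinities, confirm the affinity for 192.168.12.64/26, load the block, claim the first free address in it under a handle named after the sandbox (k8s-pod-network.<containerID>), and write the block back. A toy, in-memory allocator in that spirit; the real ipam.go persists blocks and handles in the Calico datastore and handles contention, which this sketch ignores:

    import ipaddress

    class Block:
        """Toy stand-in for a Calico IPAM block with a node affinity."""
        def __init__(self, cidr: str, node: str):
            self.cidr = ipaddress.ip_network(cidr)
            self.node = node                  # affinity, e.g. "ci-4081.3.6-n-4c16a83c6c"
            self.allocations = {}             # ip -> handle

        def assign(self, handle: str):
            for ip in self.cidr:              # first free address wins
                if ip not in self.allocations:
                    self.allocations[ip] = handle
                    return ip
            raise RuntimeError("block exhausted")

    block = Block("192.168.12.64/26", "ci-4081.3.6-n-4c16a83c6c")
    print(block.assign("k8s-pod-network.5edb9b10..."))   # truncated handle, illustrative
    # prints 192.168.12.64 in this toy; on the node above .64 was evidently already in
    # use (the whisker pod gets .65), presumably by an earlier allocation from the block
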
Jan 17 00:07:42.861687 containerd[1672]: 2026-01-17 00:07:42.786 [INFO][4458] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.65/26] IPv6=[] ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" HandleID="k8s-pod-network.5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:42.864660 containerd[1672]: 2026-01-17 00:07:42.788 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0", GenerateName:"whisker-fdd85dd66-", Namespace:"calico-system", SelfLink:"", UID:"f0b7656b-346b-4c7a-84f5-6afacf5c8b98", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fdd85dd66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"whisker-fdd85dd66-d8cmt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie236cb2c0d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:42.864660 containerd[1672]: 2026-01-17 00:07:42.788 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.65/32] ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:42.864660 containerd[1672]: 2026-01-17 00:07:42.788 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie236cb2c0d2 ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:42.864660 containerd[1672]: 2026-01-17 00:07:42.835 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:42.864660 containerd[1672]: 2026-01-17 00:07:42.836 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" 
Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0", GenerateName:"whisker-fdd85dd66-", Namespace:"calico-system", SelfLink:"", UID:"f0b7656b-346b-4c7a-84f5-6afacf5c8b98", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fdd85dd66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16", Pod:"whisker-fdd85dd66-d8cmt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie236cb2c0d2", MAC:"0e:df:28:a0:80:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:42.864660 containerd[1672]: 2026-01-17 00:07:42.857 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16" Namespace="calico-system" Pod="whisker-fdd85dd66-d8cmt" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--fdd85dd66--d8cmt-eth0" Jan 17 00:07:43.683902 containerd[1672]: time="2026-01-17T00:07:43.683628921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:43.683902 containerd[1672]: time="2026-01-17T00:07:43.683681081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:43.683902 containerd[1672]: time="2026-01-17T00:07:43.683691601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:43.683902 containerd[1672]: time="2026-01-17T00:07:43.683780641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:43.858232 systemd[1]: Started cri-containerd-5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16.scope - libcontainer container 5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16. 
Jan 17 00:07:43.916586 containerd[1672]: time="2026-01-17T00:07:43.916534943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fdd85dd66-d8cmt,Uid:f0b7656b-346b-4c7a-84f5-6afacf5c8b98,Namespace:calico-system,Attempt:0,} returns sandbox id \"5edb9b104d790d6f13531a08aae6208d550069ac2f3c8605889a449373a1af16\"" Jan 17 00:07:43.919908 containerd[1672]: time="2026-01-17T00:07:43.919874267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:07:43.928130 kernel: bpftool[4655]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:07:44.243261 systemd-networkd[1306]: calie236cb2c0d2: Gained IPv6LL Jan 17 00:07:45.097361 systemd-networkd[1306]: vxlan.calico: Link UP Jan 17 00:07:45.097369 systemd-networkd[1306]: vxlan.calico: Gained carrier Jan 17 00:07:45.383328 containerd[1672]: time="2026-01-17T00:07:45.382943207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:45.626968 containerd[1672]: time="2026-01-17T00:07:45.626889484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:07:45.674684 containerd[1672]: time="2026-01-17T00:07:45.627037684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:07:45.674684 containerd[1672]: time="2026-01-17T00:07:45.628443446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:07:45.674800 kubelet[3141]: E0117 00:07:45.627272 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:07:45.674800 kubelet[3141]: E0117 00:07:45.627315 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:07:45.674800 kubelet[3141]: E0117 00:07:45.627403 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:46.273327 containerd[1672]: time="2026-01-17T00:07:46.273251763Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:46.469576 containerd[1672]: time="2026-01-17T00:07:46.469514578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:07:46.470235 containerd[1672]: time="2026-01-17T00:07:46.469652818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:07:46.470282 kubelet[3141]: E0117 00:07:46.470095 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:07:46.470282 kubelet[3141]: E0117 00:07:46.470146 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:07:46.470282 kubelet[3141]: E0117 00:07:46.470225 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:46.470408 kubelet[3141]: E0117 00:07:46.470268 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:07:46.669081 containerd[1672]: time="2026-01-17T00:07:46.668476037Z" level=info msg="StopPodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\"" Jan 17 00:07:46.680772 containerd[1672]: time="2026-01-17T00:07:46.680345612Z" level=info msg="StopPodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\"" Jan 17 00:07:46.681196 containerd[1672]: time="2026-01-17T00:07:46.680986493Z" level=info msg="StopPodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\"" Jan 17 00:07:46.694821 containerd[1672]: time="2026-01-17T00:07:46.694739231Z" level=info msg="StopPodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\"" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.771 [WARNING][4737] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" 
WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.771 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.771 [INFO][4737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" iface="eth0" netns="" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.771 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.771 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.811 [INFO][4793] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.819 [INFO][4793] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.819 [INFO][4793] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.843 [WARNING][4793] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.843 [INFO][4793] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.846 [INFO][4793] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:46.859138 containerd[1672]: 2026-01-17 00:07:46.852 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:46.859766 containerd[1672]: time="2026-01-17T00:07:46.859455205Z" level=info msg="TearDown network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" successfully" Jan 17 00:07:46.859941 containerd[1672]: time="2026-01-17T00:07:46.859923965Z" level=info msg="StopPodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" returns successfully" Jan 17 00:07:46.860707 containerd[1672]: time="2026-01-17T00:07:46.860684246Z" level=info msg="RemovePodSandbox for \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\"" Jan 17 00:07:46.869283 containerd[1672]: time="2026-01-17T00:07:46.869224777Z" level=info msg="Forcibly stopping sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\"" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.820 [INFO][4766] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.820 [INFO][4766] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" iface="eth0" netns="/var/run/netns/cni-b6993d63-8abb-0be0-6bbd-d20d7a2d1050" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.821 [INFO][4766] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" iface="eth0" netns="/var/run/netns/cni-b6993d63-8abb-0be0-6bbd-d20d7a2d1050" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.821 [INFO][4766] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" iface="eth0" netns="/var/run/netns/cni-b6993d63-8abb-0be0-6bbd-d20d7a2d1050" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.821 [INFO][4766] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.821 [INFO][4766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.865 [INFO][4803] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.865 [INFO][4803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.865 [INFO][4803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.879 [WARNING][4803] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.879 [INFO][4803] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.883 [INFO][4803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:46.891857 containerd[1672]: 2026-01-17 00:07:46.888 [INFO][4766] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:07:46.896403 containerd[1672]: time="2026-01-17T00:07:46.892077727Z" level=info msg="TearDown network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" successfully" Jan 17 00:07:46.896403 containerd[1672]: time="2026-01-17T00:07:46.892105087Z" level=info msg="StopPodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" returns successfully" Jan 17 00:07:46.898688 systemd[1]: run-netns-cni\x2db6993d63\x2d8abb\x2d0be0\x2d6bbd\x2dd20d7a2d1050.mount: Deactivated successfully. Jan 17 00:07:46.904304 containerd[1672]: time="2026-01-17T00:07:46.904165783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-mrl7f,Uid:4d9310f4-1124-495b-a411-5323618ddd1d,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.809 [INFO][4774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.814 [INFO][4774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" iface="eth0" netns="/var/run/netns/cni-d98e6321-afe1-adf1-2b31-2e389975d811" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.816 [INFO][4774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" iface="eth0" netns="/var/run/netns/cni-d98e6321-afe1-adf1-2b31-2e389975d811" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.818 [INFO][4774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" iface="eth0" netns="/var/run/netns/cni-d98e6321-afe1-adf1-2b31-2e389975d811" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.819 [INFO][4774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.819 [INFO][4774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.894 [INFO][4801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.894 [INFO][4801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.894 [INFO][4801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.911 [WARNING][4801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.911 [INFO][4801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.913 [INFO][4801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:46.923741 containerd[1672]: 2026-01-17 00:07:46.919 [INFO][4774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:07:46.926562 containerd[1672]: time="2026-01-17T00:07:46.926148771Z" level=info msg="TearDown network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" successfully" Jan 17 00:07:46.926562 containerd[1672]: time="2026-01-17T00:07:46.926191451Z" level=info msg="StopPodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" returns successfully" Jan 17 00:07:46.929014 systemd[1]: run-netns-cni\x2dd98e6321\x2dafe1\x2dadf1\x2d2b31\x2d2e389975d811.mount: Deactivated successfully. 
Jan 17 00:07:46.931749 systemd-networkd[1306]: vxlan.calico: Gained IPv6LL Jan 17 00:07:46.934218 containerd[1672]: time="2026-01-17T00:07:46.933897461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-ght5v,Uid:bcc0dcb5-6cc0-4aca-b131-0866d93b8e20,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:07:46.949569 kubelet[3141]: E0117 00:07:46.949370 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.818 [INFO][4775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.819 [INFO][4775] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" iface="eth0" netns="/var/run/netns/cni-bd804bc2-7a2e-788c-a8a3-01c8e639b004" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.822 [INFO][4775] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" iface="eth0" netns="/var/run/netns/cni-bd804bc2-7a2e-788c-a8a3-01c8e639b004" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.825 [INFO][4775] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" iface="eth0" netns="/var/run/netns/cni-bd804bc2-7a2e-788c-a8a3-01c8e639b004" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.825 [INFO][4775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.825 [INFO][4775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.896 [INFO][4808] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.896 [INFO][4808] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.913 [INFO][4808] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.939 [WARNING][4808] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.941 [INFO][4808] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.944 [INFO][4808] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:46.957274 containerd[1672]: 2026-01-17 00:07:46.950 [INFO][4775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:07:46.961587 containerd[1672]: time="2026-01-17T00:07:46.961095537Z" level=info msg="TearDown network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" successfully" Jan 17 00:07:46.961587 containerd[1672]: time="2026-01-17T00:07:46.961133097Z" level=info msg="StopPodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" returns successfully" Jan 17 00:07:46.962725 systemd[1]: run-netns-cni\x2dbd804bc2\x2d7a2e\x2d788c\x2da8a3\x2d01c8e639b004.mount: Deactivated successfully. Jan 17 00:07:46.977812 containerd[1672]: time="2026-01-17T00:07:46.977584478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4lqg,Uid:b1f66b76-7db3-449d-92fa-faa5ceccc08b,Namespace:calico-system,Attempt:1,}" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:46.981 [WARNING][4826] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:46.982 [INFO][4826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:46.982 [INFO][4826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" iface="eth0" netns="" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:46.982 [INFO][4826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:46.982 [INFO][4826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.005 [INFO][4839] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.005 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.005 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.014 [WARNING][4839] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.015 [INFO][4839] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" HandleID="k8s-pod-network.07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-whisker--bcb9d7d84--lvzcd-eth0" Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.016 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:47.020426 containerd[1672]: 2026-01-17 00:07:47.018 [INFO][4826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7" Jan 17 00:07:47.021776 containerd[1672]: time="2026-01-17T00:07:47.020849134Z" level=info msg="TearDown network for sandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" successfully" Jan 17 00:07:47.144580 waagent[1847]: 2026-01-17T00:07:47.144521Z INFO ExtHandler Jan 17 00:07:47.180551 waagent[1847]: 2026-01-17T00:07:47.180403Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 63ba03ff-6bb1-49a0-82a3-fd40e6aa5e33 eTag: 3563564178190816682 source: Fabric] Jan 17 00:07:47.180992 waagent[1847]: 2026-01-17T00:07:47.180927Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 17 00:07:47.433190 containerd[1672]: time="2026-01-17T00:07:47.432969349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:07:47.433190 containerd[1672]: time="2026-01-17T00:07:47.433064909Z" level=info msg="RemovePodSandbox \"07deeb7644689fff8191ce1b72c76323878db5463733713aa4ac19b3b5c9c1b7\" returns successfully" Jan 17 00:07:47.678077 containerd[1672]: time="2026-01-17T00:07:47.678004027Z" level=info msg="StopPodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\"" Jan 17 00:07:47.679277 containerd[1672]: time="2026-01-17T00:07:47.679200949Z" level=info msg="StopPodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\"" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.745 [INFO][4862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.748 [INFO][4862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" iface="eth0" netns="/var/run/netns/cni-3d39c9e0-08a9-4d21-27e0-34809922a0f0" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.749 [INFO][4862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" iface="eth0" netns="/var/run/netns/cni-3d39c9e0-08a9-4d21-27e0-34809922a0f0" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.751 [INFO][4862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" iface="eth0" netns="/var/run/netns/cni-3d39c9e0-08a9-4d21-27e0-34809922a0f0" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.751 [INFO][4862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.751 [INFO][4862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.773 [INFO][4878] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.773 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.773 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.797 [WARNING][4878] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.797 [INFO][4878] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.798 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:47.807190 containerd[1672]: 2026-01-17 00:07:47.804 [INFO][4862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:07:47.808474 containerd[1672]: time="2026-01-17T00:07:47.807314635Z" level=info msg="TearDown network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" successfully" Jan 17 00:07:47.808474 containerd[1672]: time="2026-01-17T00:07:47.807342235Z" level=info msg="StopPodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" returns successfully" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.752 [INFO][4869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.752 [INFO][4869] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" iface="eth0" netns="/var/run/netns/cni-fccd338a-ae11-9289-20c0-b7ccc3c8951f" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.752 [INFO][4869] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" iface="eth0" netns="/var/run/netns/cni-fccd338a-ae11-9289-20c0-b7ccc3c8951f" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.752 [INFO][4869] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" iface="eth0" netns="/var/run/netns/cni-fccd338a-ae11-9289-20c0-b7ccc3c8951f" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.752 [INFO][4869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.752 [INFO][4869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.775 [INFO][4880] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.777 [INFO][4880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.798 [INFO][4880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.814 [WARNING][4880] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.814 [INFO][4880] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.816 [INFO][4880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:47.820198 containerd[1672]: 2026-01-17 00:07:47.818 [INFO][4869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:07:47.820592 containerd[1672]: time="2026-01-17T00:07:47.820324692Z" level=info msg="TearDown network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" successfully" Jan 17 00:07:47.820592 containerd[1672]: time="2026-01-17T00:07:47.820351972Z" level=info msg="StopPodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" returns successfully" Jan 17 00:07:47.830544 containerd[1672]: time="2026-01-17T00:07:47.830476225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzqx,Uid:c75cd337-98e1-4c98-836d-ddd5677f5fcd,Namespace:kube-system,Attempt:1,}" Jan 17 00:07:47.878089 containerd[1672]: time="2026-01-17T00:07:47.877939967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-894f9f8d4-b5lgh,Uid:f59d9319-e335-4bfc-a026-d8bbe3696e81,Namespace:calico-system,Attempt:1,}" Jan 17 00:07:47.896068 systemd[1]: run-netns-cni\x2d3d39c9e0\x2d08a9\x2d4d21\x2d27e0\x2d34809922a0f0.mount: Deactivated successfully. Jan 17 00:07:47.896158 systemd[1]: run-netns-cni\x2dfccd338a\x2dae11\x2d9289\x2d20c0\x2db7ccc3c8951f.mount: Deactivated successfully. 
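
[Annotation] Every IPAM trace here brackets its datastore work with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": the sandbox teardowns running in parallel ([4793], [4801], [4803], [4808], [4878], [4880]) serialize through a single per-node lock so two CNI invocations never mutate the same block at once. A generic illustration of per-host serialization using a file lock; the lock-file path below is made up for the demo, and Calico's actual locking lives inside its IPAM plugin:

    import fcntl
    from contextlib import contextmanager

    @contextmanager
    def host_wide_ipam_lock(path="/tmp/demo-ipam.lock"):   # illustrative path only
        with open(path, "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)   # blocks until no other caller holds it
            try:
                yield
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)

    with host_wide_ipam_lock():
        pass  # release or assign addresses, then write the block back
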
Jan 17 00:07:48.101738 systemd-networkd[1306]: cali038a4ac797f: Link UP Jan 17 00:07:48.102574 systemd-networkd[1306]: cali038a4ac797f: Gained carrier Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.014 [INFO][4892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0 calico-apiserver-57d7d85589- calico-apiserver 4d9310f4-1124-495b-a411-5323618ddd1d 948 0 2026-01-17 00:07:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d7d85589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c calico-apiserver-57d7d85589-mrl7f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali038a4ac797f [] [] }} ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.014 [INFO][4892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.040 [INFO][4904] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" HandleID="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.041 [INFO][4904] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" HandleID="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"calico-apiserver-57d7d85589-mrl7f", "timestamp":"2026-01-17 00:07:48.040961259 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.041 [INFO][4904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.041 [INFO][4904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.041 [INFO][4904] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.050 [INFO][4904] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.054 [INFO][4904] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.058 [INFO][4904] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.060 [INFO][4904] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.062 [INFO][4904] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.062 [INFO][4904] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.065 [INFO][4904] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699 Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.075 [INFO][4904] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.089 [INFO][4904] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.66/26] block=192.168.12.64/26 handle="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.089 [INFO][4904] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.66/26] handle="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.089 [INFO][4904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
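In the IPAM walk above the plugin confirms this node's affinity for the block 192.168.12.64/26, loads the block, and claims 192.168.12.66 for calico-apiserver-57d7d85589-mrl7f. Below is a minimal sketch of the arithmetic behind "assign 1 address from block", using only the Python standard library; Calico itself records allocations inside the IPAMBlock resource rather than rescanning addresses like this.

```python
import ipaddress

block = ipaddress.ip_network("192.168.12.64/26")   # affine block from the log
claimed = ipaddress.ip_address("192.168.12.66")    # address handed to the apiserver pod

assert claimed in block
print(f"{block} holds {block.num_addresses} addresses")  # a /26 holds 64

def first_free(net: ipaddress.IPv4Network, allocated: set) -> ipaddress.IPv4Address:
    """Pick the lowest unallocated address in the block (illustrative only)."""
    taken = {ipaddress.ip_address(a) for a in allocated}
    for addr in net:          # iterates all 64 addresses of the /26
        if addr not in taken:
            return addr
    raise RuntimeError("block exhausted")

# Hypothetical state: if .64 and .65 were already in use on this node,
# the next claim would be .66 -- matching the assignment above.
print(first_free(block, {"192.168.12.64", "192.168.12.65"}))
```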
Jan 17 00:07:48.120293 containerd[1672]: 2026-01-17 00:07:48.089 [INFO][4904] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.66/26] IPv6=[] ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" HandleID="k8s-pod-network.9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.121172 containerd[1672]: 2026-01-17 00:07:48.097 [INFO][4892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d9310f4-1124-495b-a411-5323618ddd1d", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"calico-apiserver-57d7d85589-mrl7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali038a4ac797f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:48.121172 containerd[1672]: 2026-01-17 00:07:48.097 [INFO][4892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.66/32] ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.121172 containerd[1672]: 2026-01-17 00:07:48.097 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali038a4ac797f ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.121172 containerd[1672]: 2026-01-17 00:07:48.104 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.121172 containerd[1672]: 2026-01-17 00:07:48.104 [INFO][4892] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d9310f4-1124-495b-a411-5323618ddd1d", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699", Pod:"calico-apiserver-57d7d85589-mrl7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali038a4ac797f", MAC:"4e:d7:8f:de:64:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:48.121172 containerd[1672]: 2026-01-17 00:07:48.116 [INFO][4892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-mrl7f" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:07:48.464177 systemd-networkd[1306]: calif6fe12a186f: Link UP Jan 17 00:07:48.465355 systemd-networkd[1306]: calif6fe12a186f: Gained carrier Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.363 [INFO][4919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0 calico-apiserver-57d7d85589- calico-apiserver bcc0dcb5-6cc0-4aca-b131-0866d93b8e20 946 0 2026-01-17 00:07:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d7d85589 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c calico-apiserver-57d7d85589-ght5v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6fe12a186f [] [] }} ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-" Jan 17 00:07:48.485899 containerd[1672]: 
2026-01-17 00:07:48.363 [INFO][4919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.405 [INFO][4937] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" HandleID="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.405 [INFO][4937] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" HandleID="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"calico-apiserver-57d7d85589-ght5v", "timestamp":"2026-01-17 00:07:48.405074172 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.405 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.405 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.405 [INFO][4937] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.424 [INFO][4937] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.429 [INFO][4937] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.433 [INFO][4937] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.435 [INFO][4937] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.437 [INFO][4937] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.437 [INFO][4937] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.439 [INFO][4937] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.447 [INFO][4937] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.453 [INFO][4937] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.67/26] block=192.168.12.64/26 handle="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.453 [INFO][4937] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.67/26] handle="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.453 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
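Each successful assignment ends in an "ipam_plugin.go 299: Calico CNI IPAM assigned addresses" record that carries both the claimed CIDR and the workload endpoint name in a single record (as for 192.168.12.66 earlier and 192.168.12.67 just below). The following is a small, purely illustrative parser for turning such records into a per-workload address inventory; the field layout is taken verbatim from these lines and may differ in other Calico releases.

```python
import re
import sys

ASSIGNED_RE = re.compile(
    r'IPAM assigned addresses IPv4=\[([^\]]*)\] IPv6=\[[^\]]*\].*?Workload="([^"]+)"'
)

def ip_inventory(lines):
    """Map workload endpoint name -> list of assigned IPv4 CIDRs."""
    inventory = {}
    for line in lines:
        # A physical journal line may hold several records, so use findall.
        for cidrs, workload in ASSIGNED_RE.findall(line):
            inventory.setdefault(workload, []).extend(
                c.strip() for c in cidrs.split(",") if c.strip()
            )
    return inventory

if __name__ == "__main__":
    for workload, ips in ip_inventory(sys.stdin).items():
        print(f"{workload}: {', '.join(ips)}")
```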
Jan 17 00:07:48.485899 containerd[1672]: 2026-01-17 00:07:48.453 [INFO][4937] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.67/26] IPv6=[] ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" HandleID="k8s-pod-network.573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.486489 containerd[1672]: 2026-01-17 00:07:48.458 [INFO][4919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"calico-apiserver-57d7d85589-ght5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6fe12a186f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:48.486489 containerd[1672]: 2026-01-17 00:07:48.459 [INFO][4919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.67/32] ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.486489 containerd[1672]: 2026-01-17 00:07:48.459 [INFO][4919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6fe12a186f ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.486489 containerd[1672]: 2026-01-17 00:07:48.463 [INFO][4919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.486489 containerd[1672]: 2026-01-17 00:07:48.464 [INFO][4919] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a", Pod:"calico-apiserver-57d7d85589-ght5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6fe12a186f", MAC:"ce:81:09:c0:4a:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:48.486489 containerd[1672]: 2026-01-17 00:07:48.482 [INFO][4919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a" Namespace="calico-apiserver" Pod="calico-apiserver-57d7d85589-ght5v" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:07:48.500311 containerd[1672]: time="2026-01-17T00:07:48.500211495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:48.503101 containerd[1672]: time="2026-01-17T00:07:48.500325695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:48.503101 containerd[1672]: time="2026-01-17T00:07:48.500362575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:48.503101 containerd[1672]: time="2026-01-17T00:07:48.500760616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:48.521201 systemd[1]: Started cri-containerd-9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699.scope - libcontainer container 9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699. 
Jan 17 00:07:48.563309 containerd[1672]: time="2026-01-17T00:07:48.563261617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-mrl7f,Uid:4d9310f4-1124-495b-a411-5323618ddd1d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699\"" Jan 17 00:07:48.566999 containerd[1672]: time="2026-01-17T00:07:48.566432501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:07:48.578167 systemd-networkd[1306]: calida7d340b51f: Link UP Jan 17 00:07:48.578363 systemd-networkd[1306]: calida7d340b51f: Gained carrier Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.417 [INFO][4931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0 csi-node-driver- calico-system b1f66b76-7db3-449d-92fa-faa5ceccc08b 947 0 2026-01-17 00:07:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c csi-node-driver-v4lqg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calida7d340b51f [] [] }} ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.418 [INFO][4931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.450 [INFO][4951] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" HandleID="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.450 [INFO][4951] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" HandleID="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"csi-node-driver-v4lqg", "timestamp":"2026-01-17 00:07:48.45043379 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.450 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.453 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.453 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.527 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.532 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.538 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.541 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.544 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.544 [INFO][4951] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.545 [INFO][4951] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.552 [INFO][4951] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.565 [INFO][4951] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.68/26] block=192.168.12.64/26 handle="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.565 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.68/26] handle="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.565 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:07:48.603079 containerd[1672]: 2026-01-17 00:07:48.567 [INFO][4951] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.68/26] IPv6=[] ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" HandleID="k8s-pod-network.26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.603613 containerd[1672]: 2026-01-17 00:07:48.573 [INFO][4931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b1f66b76-7db3-449d-92fa-faa5ceccc08b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"csi-node-driver-v4lqg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida7d340b51f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:48.603613 containerd[1672]: 2026-01-17 00:07:48.573 [INFO][4931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.68/32] ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.603613 containerd[1672]: 2026-01-17 00:07:48.573 [INFO][4931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida7d340b51f ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.603613 containerd[1672]: 2026-01-17 00:07:48.579 [INFO][4931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.603613 containerd[1672]: 2026-01-17 00:07:48.580 [INFO][4931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b1f66b76-7db3-449d-92fa-faa5ceccc08b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c", Pod:"csi-node-driver-v4lqg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida7d340b51f", MAC:"fa:a6:46:d3:b0:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:48.603613 containerd[1672]: 2026-01-17 00:07:48.599 [INFO][4931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c" Namespace="calico-system" Pod="csi-node-driver-v4lqg" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:07:48.750720 containerd[1672]: time="2026-01-17T00:07:48.750516060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:48.750720 containerd[1672]: time="2026-01-17T00:07:48.750569340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:48.750720 containerd[1672]: time="2026-01-17T00:07:48.750580340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:48.751111 containerd[1672]: time="2026-01-17T00:07:48.750671500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:48.770267 systemd[1]: Started cri-containerd-573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a.scope - libcontainer container 573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a. 
Jan 17 00:07:48.805454 containerd[1672]: time="2026-01-17T00:07:48.805414212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d7d85589-ght5v,Uid:bcc0dcb5-6cc0-4aca-b131-0866d93b8e20,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a\"" Jan 17 00:07:48.884729 containerd[1672]: time="2026-01-17T00:07:48.884585080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:48.884729 containerd[1672]: time="2026-01-17T00:07:48.884632920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:48.884729 containerd[1672]: time="2026-01-17T00:07:48.884643560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:48.885007 containerd[1672]: time="2026-01-17T00:07:48.884713720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:48.909245 systemd[1]: Started cri-containerd-26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c.scope - libcontainer container 26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c. Jan 17 00:07:48.940862 containerd[1672]: time="2026-01-17T00:07:48.940795876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v4lqg,Uid:b1f66b76-7db3-449d-92fa-faa5ceccc08b,Namespace:calico-system,Attempt:1,} returns sandbox id \"26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c\"" Jan 17 00:07:49.053683 containerd[1672]: time="2026-01-17T00:07:49.053173870Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:49.130670 containerd[1672]: time="2026-01-17T00:07:49.130522295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:07:49.130670 containerd[1672]: time="2026-01-17T00:07:49.130644455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:49.131887 kubelet[3141]: E0117 00:07:49.131100 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:49.131887 kubelet[3141]: E0117 00:07:49.131147 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:49.131887 kubelet[3141]: E0117 00:07:49.131409 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-mrl7f_calico-apiserver(4d9310f4-1124-495b-a411-5323618ddd1d): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:49.131887 kubelet[3141]: E0117 00:07:49.131455 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:07:49.133931 containerd[1672]: time="2026-01-17T00:07:49.132983578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:07:49.256265 systemd-networkd[1306]: caliba2a4c5921c: Link UP Jan 17 00:07:49.262267 systemd-networkd[1306]: caliba2a4c5921c: Gained carrier Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.109 [INFO][5101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0 coredns-66bc5c9577- kube-system c75cd337-98e1-4c98-836d-ddd5677f5fcd 960 0 2026-01-17 00:06:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c coredns-66bc5c9577-xzzqx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliba2a4c5921c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.109 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.156 [INFO][5113] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" HandleID="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.157 [INFO][5113] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" HandleID="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"coredns-66bc5c9577-xzzqx", "timestamp":"2026-01-17 00:07:49.156795491 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.157 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.157 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.157 [INFO][5113] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.177 [INFO][5113] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.191 [INFO][5113] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.203 [INFO][5113] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.206 [INFO][5113] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.208 [INFO][5113] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.208 [INFO][5113] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.215 [INFO][5113] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2 Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.226 [INFO][5113] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.241 [INFO][5113] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.69/26] block=192.168.12.64/26 handle="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.241 [INFO][5113] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.69/26] handle="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.241 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
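A few records earlier (around 00:07:49.13) the pull of ghcr.io/flatcar/calico/apiserver:v3.30.4 fails with a registry NotFound, which containerd logs once at level=error and kubelet then re-reports as ErrImagePull for calico-apiserver-57d7d85589-mrl7f. As an illustration only, the snippet below pulls the failing image reference and gRPC code out of the containerd error record; the escaping matches the msg= format shown in this journal.

```python
import re
import sys

# Matches records like:
#   level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed"
#   error="rpc error: code = NotFound desc = ..."
PULL_FAIL_RE = re.compile(
    r'level=error msg="PullImage \\"([^\\]+)\\" failed" error="rpc error: code = (\w+)'
)

for line in sys.stdin:
    for image, code in PULL_FAIL_RE.findall(line):
        print(f"pull failed: {image} (gRPC code {code})")
```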
Jan 17 00:07:49.286710 containerd[1672]: 2026-01-17 00:07:49.242 [INFO][5113] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.69/26] IPv6=[] ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" HandleID="k8s-pod-network.2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.287557 containerd[1672]: 2026-01-17 00:07:49.250 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c75cd337-98e1-4c98-836d-ddd5677f5fcd", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"coredns-66bc5c9577-xzzqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba2a4c5921c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:49.287557 containerd[1672]: 2026-01-17 00:07:49.250 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.69/32] ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.287557 containerd[1672]: 2026-01-17 00:07:49.250 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba2a4c5921c ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" 
WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.287557 containerd[1672]: 2026-01-17 00:07:49.262 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.287557 containerd[1672]: 2026-01-17 00:07:49.265 [INFO][5101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c75cd337-98e1-4c98-836d-ddd5677f5fcd", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2", Pod:"coredns-66bc5c9577-xzzqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba2a4c5921c", MAC:"0e:52:45:9f:1f:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:49.287749 containerd[1672]: 2026-01-17 00:07:49.283 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2" Namespace="kube-system" Pod="coredns-66bc5c9577-xzzqx" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:07:49.325592 containerd[1672]: time="2026-01-17T00:07:49.323260598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:49.325592 containerd[1672]: time="2026-01-17T00:07:49.323359518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:49.325592 containerd[1672]: time="2026-01-17T00:07:49.323397998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:49.325592 containerd[1672]: time="2026-01-17T00:07:49.323516398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:49.349233 systemd[1]: Started cri-containerd-2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2.scope - libcontainer container 2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2. Jan 17 00:07:49.357875 systemd-networkd[1306]: cali73bfb212120: Link UP Jan 17 00:07:49.359747 systemd-networkd[1306]: cali73bfb212120: Gained carrier Jan 17 00:07:49.395622 containerd[1672]: time="2026-01-17T00:07:49.395576696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-xzzqx,Uid:c75cd337-98e1-4c98-836d-ddd5677f5fcd,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2\"" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.200 [INFO][5122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0 calico-kube-controllers-894f9f8d4- calico-system f59d9319-e335-4bfc-a026-d8bbe3696e81 961 0 2026-01-17 00:07:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:894f9f8d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c calico-kube-controllers-894f9f8d4-b5lgh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali73bfb212120 [] [] }} ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.201 [INFO][5122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.250 [INFO][5132] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" HandleID="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.250 [INFO][5132] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" HandleID="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" 
Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032b750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"calico-kube-controllers-894f9f8d4-b5lgh", "timestamp":"2026-01-17 00:07:49.250637379 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.250 [INFO][5132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.251 [INFO][5132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.251 [INFO][5132] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.276 [INFO][5132] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.292 [INFO][5132] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.302 [INFO][5132] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.309 [INFO][5132] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.316 [INFO][5132] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.316 [INFO][5132] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.321 [INFO][5132] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.331 [INFO][5132] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.345 [INFO][5132] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.70/26] block=192.168.12.64/26 handle="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.345 [INFO][5132] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.70/26] handle="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.345 [INFO][5132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:07:49.399098 containerd[1672]: 2026-01-17 00:07:49.345 [INFO][5132] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.70/26] IPv6=[] ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" HandleID="k8s-pod-network.b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:49.400112 containerd[1672]: 2026-01-17 00:07:49.351 [INFO][5122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0", GenerateName:"calico-kube-controllers-894f9f8d4-", Namespace:"calico-system", SelfLink:"", UID:"f59d9319-e335-4bfc-a026-d8bbe3696e81", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"894f9f8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"calico-kube-controllers-894f9f8d4-b5lgh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73bfb212120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:49.400112 containerd[1672]: 2026-01-17 00:07:49.351 [INFO][5122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.70/32] ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:49.400112 containerd[1672]: 2026-01-17 00:07:49.351 [INFO][5122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73bfb212120 ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:49.400112 containerd[1672]: 2026-01-17 00:07:49.359 [INFO][5122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 
00:07:49.400112 containerd[1672]: 2026-01-17 00:07:49.361 [INFO][5122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0", GenerateName:"calico-kube-controllers-894f9f8d4-", Namespace:"calico-system", SelfLink:"", UID:"f59d9319-e335-4bfc-a026-d8bbe3696e81", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"894f9f8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f", Pod:"calico-kube-controllers-894f9f8d4-b5lgh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73bfb212120", MAC:"32:9e:5a:df:de:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:49.400112 containerd[1672]: 2026-01-17 00:07:49.392 [INFO][5122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f" Namespace="calico-system" Pod="calico-kube-controllers-894f9f8d4-b5lgh" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:07:49.408734 containerd[1672]: time="2026-01-17T00:07:49.408351553Z" level=info msg="CreateContainer within sandbox \"2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:07:49.439423 containerd[1672]: time="2026-01-17T00:07:49.439322436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:49.439423 containerd[1672]: time="2026-01-17T00:07:49.439382996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:49.439423 containerd[1672]: time="2026-01-17T00:07:49.439402076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:49.439694 containerd[1672]: time="2026-01-17T00:07:49.439513036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:49.448295 containerd[1672]: time="2026-01-17T00:07:49.448005808Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:49.456231 systemd[1]: Started cri-containerd-b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f.scope - libcontainer container b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f. Jan 17 00:07:49.493319 containerd[1672]: time="2026-01-17T00:07:49.493275309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-894f9f8d4-b5lgh,Uid:f59d9319-e335-4bfc-a026-d8bbe3696e81,Namespace:calico-system,Attempt:1,} returns sandbox id \"b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f\"" Jan 17 00:07:49.522724 containerd[1672]: time="2026-01-17T00:07:49.522664909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:07:49.522904 containerd[1672]: time="2026-01-17T00:07:49.522781709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:49.523002 kubelet[3141]: E0117 00:07:49.522961 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:49.523075 kubelet[3141]: E0117 00:07:49.523019 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:07:49.523725 containerd[1672]: time="2026-01-17T00:07:49.523368430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:07:49.523844 kubelet[3141]: E0117 00:07:49.523601 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-ght5v_calico-apiserver(bcc0dcb5-6cc0-4aca-b131-0866d93b8e20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:49.523844 kubelet[3141]: E0117 00:07:49.523663 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:07:49.679403 containerd[1672]: time="2026-01-17T00:07:49.679190882Z" level=info msg="StopPodSandbox for 
\"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\"" Jan 17 00:07:49.682665 containerd[1672]: time="2026-01-17T00:07:49.679334603Z" level=info msg="StopPodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\"" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.753 [INFO][5261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.754 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" iface="eth0" netns="/var/run/netns/cni-7a5d320c-8a29-8e17-5ea0-2c9b2391f1ae" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.754 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" iface="eth0" netns="/var/run/netns/cni-7a5d320c-8a29-8e17-5ea0-2c9b2391f1ae" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.755 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" iface="eth0" netns="/var/run/netns/cni-7a5d320c-8a29-8e17-5ea0-2c9b2391f1ae" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.755 [INFO][5261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.755 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.780 [INFO][5273] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.780 [INFO][5273] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.780 [INFO][5273] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.791 [WARNING][5273] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.791 [INFO][5273] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.793 [INFO][5273] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:49.796270 containerd[1672]: 2026-01-17 00:07:49.794 [INFO][5261] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:07:49.797132 containerd[1672]: time="2026-01-17T00:07:49.796977203Z" level=info msg="TearDown network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" successfully" Jan 17 00:07:49.797132 containerd[1672]: time="2026-01-17T00:07:49.797008523Z" level=info msg="StopPodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" returns successfully" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.749 [INFO][5253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.749 [INFO][5253] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" iface="eth0" netns="/var/run/netns/cni-a3a85061-f626-41bc-81f0-a37198e01c54" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.749 [INFO][5253] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" iface="eth0" netns="/var/run/netns/cni-a3a85061-f626-41bc-81f0-a37198e01c54" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.750 [INFO][5253] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" iface="eth0" netns="/var/run/netns/cni-a3a85061-f626-41bc-81f0-a37198e01c54" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.750 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.750 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.786 [INFO][5271] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.786 [INFO][5271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.793 [INFO][5271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.802 [WARNING][5271] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.802 [INFO][5271] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.803 [INFO][5271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:07:49.808496 containerd[1672]: 2026-01-17 00:07:49.805 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:07:49.809074 containerd[1672]: time="2026-01-17T00:07:49.808908419Z" level=info msg="TearDown network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" successfully" Jan 17 00:07:49.809074 containerd[1672]: time="2026-01-17T00:07:49.808941339Z" level=info msg="StopPodSandbox for \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" returns successfully" Jan 17 00:07:49.812166 systemd-networkd[1306]: cali038a4ac797f: Gained IPv6LL Jan 17 00:07:49.814105 containerd[1672]: time="2026-01-17T00:07:49.814071586Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:49.899599 systemd[1]: run-netns-cni\x2da3a85061\x2df626\x2d41bc\x2d81f0\x2da37198e01c54.mount: Deactivated successfully. Jan 17 00:07:49.899685 systemd[1]: run-netns-cni\x2d7a5d320c\x2d8a29\x2d8e17\x2d5ea0\x2d2c9b2391f1ae.mount: Deactivated successfully. 
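After the teardown entries above, systemd reports both cni-* netns mounts as deactivated. A small sketch for confirming those namespaces are actually gone, assuming shell access on the node and the usual /var/run/netns location (the namespace names are copied from the log):

```python
# Check that the CNI network namespaces named in the teardown entries no longer exist.
# Assumes this runs on the node itself; /var/run/netns is where the CNI plugin kept them.
import os

torn_down = [
    "cni-7a5d320c-8a29-8e17-5ea0-2c9b2391f1ae",
    "cni-a3a85061-f626-41bc-81f0-a37198e01c54",
]
netns_dir = "/var/run/netns"
present = set(os.listdir(netns_dir)) if os.path.isdir(netns_dir) else set()

for name in torn_down:
    status = "still present" if name in present else "removed"
    print(f"{name}: {status}")
```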
Jan 17 00:07:49.964776 kubelet[3141]: E0117 00:07:49.963140 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:07:49.964776 kubelet[3141]: E0117 00:07:49.963470 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:07:50.323610 systemd-networkd[1306]: calif6fe12a186f: Gained IPv6LL Jan 17 00:07:50.579223 systemd-networkd[1306]: calida7d340b51f: Gained IPv6LL Jan 17 00:07:50.835207 systemd-networkd[1306]: cali73bfb212120: Gained IPv6LL Jan 17 00:07:51.219220 systemd-networkd[1306]: caliba2a4c5921c: Gained IPv6LL Jan 17 00:07:52.382660 containerd[1672]: time="2026-01-17T00:07:52.382484645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:07:52.382660 containerd[1672]: time="2026-01-17T00:07:52.382599046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:07:52.622116 containerd[1672]: time="2026-01-17T00:07:52.383553887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:07:52.622159 kubelet[3141]: E0117 00:07:52.382904 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:07:52.622159 kubelet[3141]: E0117 00:07:52.382948 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:07:52.622159 kubelet[3141]: E0117 00:07:52.383129 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 17 00:07:52.622480 containerd[1672]: time="2026-01-17T00:07:52.622326812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m66dw,Uid:0ad856fe-523a-4a16-bb22-1a01d08264e2,Namespace:kube-system,Attempt:1,}" Jan 17 00:07:53.085384 containerd[1672]: time="2026-01-17T00:07:53.085332483Z" level=info msg="CreateContainer within sandbox \"2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fc2b5cd26857f2f0f4e18e29e5ed12ea4a3511c191eb250844517fd8e2c67da\"" Jan 17 00:07:53.128974 containerd[1672]: time="2026-01-17T00:07:53.087377326Z" level=info msg="StartContainer for \"2fc2b5cd26857f2f0f4e18e29e5ed12ea4a3511c191eb250844517fd8e2c67da\"" Jan 17 00:07:53.131424 containerd[1672]: time="2026-01-17T00:07:53.131386546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hg9nz,Uid:92047ce3-1e28-4b15-bb95-00e4947b1fab,Namespace:calico-system,Attempt:1,}" Jan 17 00:07:53.159219 systemd[1]: Started cri-containerd-2fc2b5cd26857f2f0f4e18e29e5ed12ea4a3511c191eb250844517fd8e2c67da.scope - libcontainer container 2fc2b5cd26857f2f0f4e18e29e5ed12ea4a3511c191eb250844517fd8e2c67da. Jan 17 00:07:53.319365 containerd[1672]: time="2026-01-17T00:07:53.319067721Z" level=info msg="StartContainer for \"2fc2b5cd26857f2f0f4e18e29e5ed12ea4a3511c191eb250844517fd8e2c67da\" returns successfully" Jan 17 00:07:53.477574 containerd[1672]: time="2026-01-17T00:07:53.477431617Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:53.619856 containerd[1672]: time="2026-01-17T00:07:53.619802211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:07:53.620216 containerd[1672]: time="2026-01-17T00:07:53.619917571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:07:53.620274 kubelet[3141]: E0117 00:07:53.620077 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:07:53.620274 kubelet[3141]: E0117 00:07:53.620120 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:07:53.620349 kubelet[3141]: E0117 00:07:53.620309 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-894f9f8d4-b5lgh_calico-system(f59d9319-e335-4bfc-a026-d8bbe3696e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:53.620403 kubelet[3141]: E0117 00:07:53.620347 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:07:53.621058 containerd[1672]: time="2026-01-17T00:07:53.620804493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:07:53.792702 systemd-networkd[1306]: cali456f4fb461b: Link UP Jan 17 00:07:53.795269 systemd-networkd[1306]: cali456f4fb461b: Gained carrier Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.723 [INFO][5334] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0 coredns-66bc5c9577- kube-system 0ad856fe-523a-4a16-bb22-1a01d08264e2 996 0 2026-01-17 00:06:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c coredns-66bc5c9577-m66dw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali456f4fb461b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.723 [INFO][5334] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.749 [INFO][5345] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" HandleID="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.749 [INFO][5345] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" HandleID="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"coredns-66bc5c9577-m66dw", "timestamp":"2026-01-17 00:07:53.749055787 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.749 [INFO][5345] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.749 [INFO][5345] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.749 [INFO][5345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.758 [INFO][5345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.762 [INFO][5345] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.765 [INFO][5345] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.767 [INFO][5345] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.769 [INFO][5345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.769 [INFO][5345] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.770 [INFO][5345] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.776 [INFO][5345] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.785 [INFO][5345] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.71/26] block=192.168.12.64/26 handle="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.786 [INFO][5345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.71/26] handle="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.786 [INFO][5345] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
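The kubelet entries in this section alternate between ErrImagePull and ImagePullBackOff for the same images. The back-off intervals themselves are not printed in the log; the sketch below only illustrates the usual doubling schedule, assuming the commonly cited kubelet defaults of a 10-second base capped at 5 minutes (those defaults are an assumption, not something shown here):

```python
# Illustrate an exponential image-pull back-off schedule.
# base and cap follow the commonly cited kubelet defaults (10s doubling up to 300s);
# they are assumptions here, not values read from the log.
BASE_SECONDS = 10
CAP_SECONDS = 300

def backoff_schedule(retries: int) -> list[int]:
    """Return the wait (in seconds) before each retry, doubling up to the cap."""
    waits = []
    delay = BASE_SECONDS
    for _ in range(retries):
        waits.append(delay)
        delay = min(delay * 2, CAP_SECONDS)
    return waits

print(backoff_schedule(7))   # [10, 20, 40, 80, 160, 300, 300]
```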
Jan 17 00:07:53.820469 containerd[1672]: 2026-01-17 00:07:53.786 [INFO][5345] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.71/26] IPv6=[] ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" HandleID="k8s-pod-network.c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.821661 containerd[1672]: 2026-01-17 00:07:53.788 [INFO][5334] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0ad856fe-523a-4a16-bb22-1a01d08264e2", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"coredns-66bc5c9577-m66dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali456f4fb461b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:53.821661 containerd[1672]: 2026-01-17 00:07:53.788 [INFO][5334] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.71/32] ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.821661 containerd[1672]: 2026-01-17 00:07:53.788 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali456f4fb461b ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" 
WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.821661 containerd[1672]: 2026-01-17 00:07:53.795 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.821661 containerd[1672]: 2026-01-17 00:07:53.798 [INFO][5334] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0ad856fe-523a-4a16-bb22-1a01d08264e2", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a", Pod:"coredns-66bc5c9577-m66dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali456f4fb461b", MAC:"2e:a4:19:31:e9:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:53.822073 containerd[1672]: 2026-01-17 00:07:53.817 [INFO][5334] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a" Namespace="kube-system" Pod="coredns-66bc5c9577-m66dw" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:07:53.847930 containerd[1672]: time="2026-01-17T00:07:53.847786282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:53.847930 containerd[1672]: time="2026-01-17T00:07:53.847860522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:53.847930 containerd[1672]: time="2026-01-17T00:07:53.847883642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:53.848295 containerd[1672]: time="2026-01-17T00:07:53.848228202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:53.866217 systemd[1]: Started cri-containerd-c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a.scope - libcontainer container c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a. Jan 17 00:07:53.895904 containerd[1672]: time="2026-01-17T00:07:53.895863867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-m66dw,Uid:0ad856fe-523a-4a16-bb22-1a01d08264e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a\"" Jan 17 00:07:53.924530 containerd[1672]: time="2026-01-17T00:07:53.924396666Z" level=info msg="CreateContainer within sandbox \"c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:07:53.975944 kubelet[3141]: E0117 00:07:53.975794 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:07:54.016887 kubelet[3141]: I0117 00:07:54.016718 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xzzqx" podStartSLOduration=62.016699592 podStartE2EDuration="1m2.016699592s" podCreationTimestamp="2026-01-17 00:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:54.014663029 +0000 UTC m=+67.422517550" watchObservedRunningTime="2026-01-17 00:07:54.016699592 +0000 UTC m=+67.424554073" Jan 17 00:07:54.051067 systemd-networkd[1306]: calidbc8046b587: Link UP Jan 17 00:07:54.052407 systemd-networkd[1306]: calidbc8046b587: Gained carrier Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.953 [INFO][5402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0 goldmane-7c778bb748- calico-system 92047ce3-1e28-4b15-bb95-00e4947b1fab 995 0 2026-01-17 00:07:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-4c16a83c6c goldmane-7c778bb748-hg9nz eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] calidbc8046b587 [] [] }} ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.953 [INFO][5402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.985 [INFO][5416] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" HandleID="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.985 [INFO][5416] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" HandleID="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-4c16a83c6c", "pod":"goldmane-7c778bb748-hg9nz", "timestamp":"2026-01-17 00:07:53.98579379 +0000 UTC"}, Hostname:"ci-4081.3.6-n-4c16a83c6c", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.986 [INFO][5416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.986 [INFO][5416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.986 [INFO][5416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-4c16a83c6c' Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:53.996 [INFO][5416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.004 [INFO][5416] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.008 [INFO][5416] ipam/ipam.go 511: Trying affinity for 192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.013 [INFO][5416] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.020 [INFO][5416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.020 [INFO][5416] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.023 [INFO][5416] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4 Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.028 [INFO][5416] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.042 [INFO][5416] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.72/26] block=192.168.12.64/26 handle="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.042 [INFO][5416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.72/26] handle="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" host="ci-4081.3.6-n-4c16a83c6c" Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.043 [INFO][5416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
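In the coredns WorkloadEndpoint dumps earlier, the same ports appear twice: in decimal in the CNI plugin's summary and as hex Port values in the serialized endpoint (0x35, 0x23c1, 0x1f90, 0x1ff5). A quick decode, using only the names and hex values copied from those entries:

```python
# Decode the hex Port values from the coredns WorkloadEndpoint dump above.
hex_ports = {
    "dns": 0x35,
    "dns-tcp": 0x35,
    "metrics": 0x23C1,
    "liveness-probe": 0x1F90,
    "readiness-probe": 0x1FF5,
}
for name, port in hex_ports.items():
    print(f"{name}: {port}")
# dns/dns-tcp -> 53, metrics -> 9153, liveness-probe -> 8080, readiness-probe -> 8181
```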
Jan 17 00:07:54.077830 containerd[1672]: 2026-01-17 00:07:54.043 [INFO][5416] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.72/26] IPv6=[] ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" HandleID="k8s-pod-network.bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.081548 containerd[1672]: 2026-01-17 00:07:54.046 [INFO][5402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"92047ce3-1e28-4b15-bb95-00e4947b1fab", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"", Pod:"goldmane-7c778bb748-hg9nz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbc8046b587", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:54.081548 containerd[1672]: 2026-01-17 00:07:54.047 [INFO][5402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.72/32] ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.081548 containerd[1672]: 2026-01-17 00:07:54.047 [INFO][5402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbc8046b587 ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.081548 containerd[1672]: 2026-01-17 00:07:54.051 [INFO][5402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.081548 containerd[1672]: 2026-01-17 00:07:54.053 [INFO][5402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" 
Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"92047ce3-1e28-4b15-bb95-00e4947b1fab", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4", Pod:"goldmane-7c778bb748-hg9nz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbc8046b587", MAC:"1a:d3:53:2e:21:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:07:54.081548 containerd[1672]: 2026-01-17 00:07:54.072 [INFO][5402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4" Namespace="calico-system" Pod="goldmane-7c778bb748-hg9nz" WorkloadEndpoint="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:07:54.090748 containerd[1672]: time="2026-01-17T00:07:54.090701213Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:54.162945 containerd[1672]: time="2026-01-17T00:07:54.161408869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:07:54.162945 containerd[1672]: time="2026-01-17T00:07:54.161496509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:07:54.162945 containerd[1672]: time="2026-01-17T00:07:54.161511749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:54.162945 containerd[1672]: time="2026-01-17T00:07:54.161605629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:07:54.180064 containerd[1672]: time="2026-01-17T00:07:54.179968854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:07:54.180464 kubelet[3141]: E0117 00:07:54.180375 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:07:54.180464 kubelet[3141]: E0117 00:07:54.180426 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:07:54.183179 kubelet[3141]: E0117 00:07:54.180516 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:54.183179 kubelet[3141]: E0117 00:07:54.180556 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:54.183288 containerd[1672]: time="2026-01-17T00:07:54.180805856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:07:54.203967 systemd[1]: Started cri-containerd-bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4.scope - libcontainer container bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4. Jan 17 00:07:54.233016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870375774.mount: Deactivated successfully. 
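Every pull in this log fails the same way: the ghcr.io/flatcar/calico/* references for v3.30.4 resolve to "not found". Below is a sketch of checking one such tag directly against the registry, assuming ghcr.io follows the standard OCI distribution flow with an anonymous pull token for public repositories; the token endpoint and Accept headers are assumptions about the registry, not anything taken from the log:

```python
# Ask the registry whether a tag exists by requesting its manifest.
# The anonymous-token endpoint and Accept headers are assumptions about ghcr.io's
# OCI distribution implementation; a 404 on the manifest matches the "not found"
# errors in the log entries above.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/goldmane"   # repository from one of the failing pulls in the log
TAG = "v3.30.4"

def tag_exists(repo: str, tag: str) -> bool:
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists(REPO, TAG))
```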
Jan 17 00:07:54.309342 containerd[1672]: time="2026-01-17T00:07:54.308754430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-hg9nz,Uid:92047ce3-1e28-4b15-bb95-00e4947b1fab,Namespace:calico-system,Attempt:1,} returns sandbox id \"bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4\"" Jan 17 00:07:54.312308 containerd[1672]: time="2026-01-17T00:07:54.312269675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:07:54.431500 containerd[1672]: time="2026-01-17T00:07:54.431456197Z" level=info msg="CreateContainer within sandbox \"c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b5678ea6a1986ad3be415e010bb27b73895ba22622579a2b90fe3b1535b671f\"" Jan 17 00:07:54.432172 containerd[1672]: time="2026-01-17T00:07:54.431965518Z" level=info msg="StartContainer for \"4b5678ea6a1986ad3be415e010bb27b73895ba22622579a2b90fe3b1535b671f\"" Jan 17 00:07:54.459836 systemd[1]: Started cri-containerd-4b5678ea6a1986ad3be415e010bb27b73895ba22622579a2b90fe3b1535b671f.scope - libcontainer container 4b5678ea6a1986ad3be415e010bb27b73895ba22622579a2b90fe3b1535b671f. Jan 17 00:07:54.496028 containerd[1672]: time="2026-01-17T00:07:54.495984445Z" level=info msg="StartContainer for \"4b5678ea6a1986ad3be415e010bb27b73895ba22622579a2b90fe3b1535b671f\" returns successfully" Jan 17 00:07:54.639172 containerd[1672]: time="2026-01-17T00:07:54.639023640Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:54.981766 kubelet[3141]: E0117 00:07:54.981550 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:07:55.021538 kubelet[3141]: I0117 00:07:55.021478 3141 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-m66dw" podStartSLOduration=63.021463361 podStartE2EDuration="1m3.021463361s" podCreationTimestamp="2026-01-17 00:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:07:55.0205192 +0000 UTC m=+68.428373721" watchObservedRunningTime="2026-01-17 00:07:55.021463361 +0000 UTC m=+68.429317882" Jan 17 00:07:55.187230 systemd-networkd[1306]: cali456f4fb461b: Gained IPv6LL Jan 17 00:07:55.763309 systemd-networkd[1306]: calidbc8046b587: Gained IPv6LL Jan 17 00:07:56.674070 containerd[1672]: time="2026-01-17T00:07:56.673993012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:07:56.674468 containerd[1672]: time="2026-01-17T00:07:56.674051412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:07:56.674499 kubelet[3141]: E0117 00:07:56.674308 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:07:56.674499 kubelet[3141]: E0117 00:07:56.674371 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:07:56.674718 kubelet[3141]: E0117 00:07:56.674565 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-hg9nz_calico-system(92047ce3-1e28-4b15-bb95-00e4947b1fab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:56.674718 kubelet[3141]: E0117 00:07:56.674601 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:07:56.984876 kubelet[3141]: E0117 00:07:56.984471 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:07:59.679709 containerd[1672]: time="2026-01-17T00:07:59.679667234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:07:59.972758 containerd[1672]: time="2026-01-17T00:07:59.972634617Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:07:59.976112 containerd[1672]: time="2026-01-17T00:07:59.976071502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:07:59.976197 containerd[1672]: time="2026-01-17T00:07:59.976170462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:07:59.976344 kubelet[3141]: E0117 00:07:59.976311 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:07:59.977346 kubelet[3141]: E0117 00:07:59.976357 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:07:59.977346 kubelet[3141]: E0117 00:07:59.976438 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:07:59.978222 containerd[1672]: time="2026-01-17T00:07:59.977880144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:08:00.237023 containerd[1672]: time="2026-01-17T00:08:00.236765323Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:00.239591 containerd[1672]: time="2026-01-17T00:08:00.239487006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:08:00.239591 containerd[1672]: time="2026-01-17T00:08:00.239555286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:08:00.239747 kubelet[3141]: E0117 00:08:00.239692 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:08:00.239747 kubelet[3141]: E0117 00:08:00.239733 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:08:00.239909 kubelet[3141]: E0117 00:08:00.239801 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:00.239909 kubelet[3141]: E0117 00:08:00.239843 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:08:00.681657 containerd[1672]: time="2026-01-17T00:08:00.681088064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:08:00.924565 containerd[1672]: time="2026-01-17T00:08:00.924515183Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:00.927501 containerd[1672]: time="2026-01-17T00:08:00.927464707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:08:00.927589 containerd[1672]: time="2026-01-17T00:08:00.927561387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:08:00.927770 kubelet[3141]: E0117 00:08:00.927730 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:00.927867 kubelet[3141]: E0117 00:08:00.927781 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:00.928316 kubelet[3141]: E0117 00:08:00.927875 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-ght5v_calico-apiserver(bcc0dcb5-6cc0-4aca-b131-0866d93b8e20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:00.928316 kubelet[3141]: E0117 00:08:00.927909 3141 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:08:01.679967 containerd[1672]: time="2026-01-17T00:08:01.679792492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:08:01.920470 containerd[1672]: time="2026-01-17T00:08:01.920377127Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:01.923060 containerd[1672]: time="2026-01-17T00:08:01.922856410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:08:01.923060 containerd[1672]: time="2026-01-17T00:08:01.922908650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:08:01.923161 kubelet[3141]: E0117 00:08:01.923107 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:01.923813 kubelet[3141]: E0117 00:08:01.923161 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:01.923813 kubelet[3141]: E0117 00:08:01.923248 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-mrl7f_calico-apiserver(4d9310f4-1124-495b-a411-5323618ddd1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:01.923813 kubelet[3141]: E0117 00:08:01.923295 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:08:06.679593 containerd[1672]: time="2026-01-17T00:08:06.679230523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:08:06.936123 containerd[1672]: time="2026-01-17T00:08:06.935982340Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Jan 17 00:08:06.938544 containerd[1672]: time="2026-01-17T00:08:06.938488944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:08:06.938667 containerd[1672]: time="2026-01-17T00:08:06.938593504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:08:06.939620 kubelet[3141]: E0117 00:08:06.938830 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:08:06.939620 kubelet[3141]: E0117 00:08:06.938885 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:08:06.939620 kubelet[3141]: E0117 00:08:06.939113 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:06.941004 containerd[1672]: time="2026-01-17T00:08:06.939179305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:08:07.181921 containerd[1672]: time="2026-01-17T00:08:07.181872703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:07.184541 containerd[1672]: time="2026-01-17T00:08:07.184506227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:08:07.185380 containerd[1672]: time="2026-01-17T00:08:07.184599427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:08:07.185380 containerd[1672]: time="2026-01-17T00:08:07.185128708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:08:07.185493 kubelet[3141]: E0117 00:08:07.184733 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:08:07.185493 kubelet[3141]: E0117 00:08:07.184779 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:08:07.185493 kubelet[3141]: E0117 00:08:07.184972 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-894f9f8d4-b5lgh_calico-system(f59d9319-e335-4bfc-a026-d8bbe3696e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:07.185493 kubelet[3141]: E0117 00:08:07.185033 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:08:07.471409 containerd[1672]: time="2026-01-17T00:08:07.471361803Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:07.474115 containerd[1672]: time="2026-01-17T00:08:07.474071367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:08:07.474196 containerd[1672]: time="2026-01-17T00:08:07.474174407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:08:07.474338 kubelet[3141]: E0117 00:08:07.474285 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:08:07.474338 kubelet[3141]: E0117 00:08:07.474330 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:08:07.474424 kubelet[3141]: E0117 00:08:07.474395 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:07.474479 kubelet[3141]: E0117 00:08:07.474435 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:08:10.679838 containerd[1672]: time="2026-01-17T00:08:10.679766376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:08:11.015659 containerd[1672]: time="2026-01-17T00:08:11.015489777Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:11.018017 containerd[1672]: time="2026-01-17T00:08:11.017804580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:08:11.018017 containerd[1672]: time="2026-01-17T00:08:11.017907340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:08:11.018205 kubelet[3141]: E0117 00:08:11.018168 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:08:11.019334 kubelet[3141]: E0117 00:08:11.018316 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:08:11.019334 kubelet[3141]: E0117 00:08:11.018398 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-hg9nz_calico-system(92047ce3-1e28-4b15-bb95-00e4947b1fab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:11.019334 kubelet[3141]: E0117 00:08:11.018428 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:08:13.681034 kubelet[3141]: E0117 00:08:13.680173 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:08:13.681521 kubelet[3141]: E0117 00:08:13.681211 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:08:15.679385 kubelet[3141]: E0117 00:08:15.679317 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:08:19.679861 kubelet[3141]: E0117 00:08:19.679809 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:08:20.679260 kubelet[3141]: E0117 00:08:20.679024 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:08:24.679236 containerd[1672]: time="2026-01-17T00:08:24.679144244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:08:24.970401 containerd[1672]: time="2026-01-17T00:08:24.970266388Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:24.972549 containerd[1672]: time="2026-01-17T00:08:24.972507871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:08:24.972549 containerd[1672]: time="2026-01-17T00:08:24.972586351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:08:24.972755 kubelet[3141]: E0117 00:08:24.972716 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:08:24.973031 kubelet[3141]: E0117 00:08:24.972763 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:08:24.973031 kubelet[3141]: E0117 00:08:24.972836 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:24.975759 containerd[1672]: time="2026-01-17T00:08:24.975721635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:08:25.267710 containerd[1672]: time="2026-01-17T00:08:25.267619740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:25.270123 containerd[1672]: time="2026-01-17T00:08:25.270082063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:08:25.270332 containerd[1672]: time="2026-01-17T00:08:25.270178063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:08:25.270388 kubelet[3141]: E0117 00:08:25.270348 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:08:25.270431 kubelet[3141]: E0117 00:08:25.270397 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:08:25.270489 kubelet[3141]: E0117 00:08:25.270470 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:25.270541 kubelet[3141]: E0117 00:08:25.270516 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:08:25.679147 kubelet[3141]: E0117 00:08:25.678526 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:08:28.681499 containerd[1672]: time="2026-01-17T00:08:28.681260918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:08:28.973635 containerd[1672]: time="2026-01-17T00:08:28.973507522Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Jan 17 00:08:28.976024 containerd[1672]: time="2026-01-17T00:08:28.975961646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:08:28.976117 containerd[1672]: time="2026-01-17T00:08:28.976093166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:08:28.976449 kubelet[3141]: E0117 00:08:28.976248 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:28.976449 kubelet[3141]: E0117 00:08:28.976300 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:28.976449 kubelet[3141]: E0117 00:08:28.976380 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-ght5v_calico-apiserver(bcc0dcb5-6cc0-4aca-b131-0866d93b8e20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:28.976449 kubelet[3141]: E0117 00:08:28.976412 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:08:29.678788 containerd[1672]: time="2026-01-17T00:08:29.678029197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:08:29.957915 containerd[1672]: time="2026-01-17T00:08:29.957787488Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:29.960840 containerd[1672]: time="2026-01-17T00:08:29.960740452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:08:29.960840 containerd[1672]: time="2026-01-17T00:08:29.960818652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:08:29.961037 kubelet[3141]: E0117 00:08:29.960990 3141 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:29.961111 kubelet[3141]: E0117 00:08:29.961051 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:08:29.961139 kubelet[3141]: E0117 00:08:29.961127 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-mrl7f_calico-apiserver(4d9310f4-1124-495b-a411-5323618ddd1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:29.961188 kubelet[3141]: E0117 00:08:29.961159 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:08:31.679059 containerd[1672]: time="2026-01-17T00:08:31.678967976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:08:31.930549 containerd[1672]: time="2026-01-17T00:08:31.930108505Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:31.934037 containerd[1672]: time="2026-01-17T00:08:31.933920670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:08:31.934037 containerd[1672]: time="2026-01-17T00:08:31.933981710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:08:31.936690 kubelet[3141]: E0117 00:08:31.936119 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:08:31.936690 kubelet[3141]: E0117 00:08:31.936176 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:08:31.936690 kubelet[3141]: E0117 00:08:31.936250 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start 
failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:31.938088 containerd[1672]: time="2026-01-17T00:08:31.937753276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:08:32.050986 systemd[1]: Started sshd@7-10.200.20.43:22-10.200.16.10:57798.service - OpenSSH per-connection server daemon (10.200.16.10:57798). Jan 17 00:08:32.217481 containerd[1672]: time="2026-01-17T00:08:32.216912806Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:32.219511 containerd[1672]: time="2026-01-17T00:08:32.219413450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:08:32.219589 containerd[1672]: time="2026-01-17T00:08:32.219492170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:08:32.219678 kubelet[3141]: E0117 00:08:32.219637 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:08:32.219732 kubelet[3141]: E0117 00:08:32.219685 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:08:32.219772 kubelet[3141]: E0117 00:08:32.219750 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:32.219835 kubelet[3141]: E0117 00:08:32.219791 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:08:32.508091 sshd[5581]: Accepted publickey for core from 10.200.16.10 port 57798 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:32.511665 sshd[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:32.519487 systemd-logind[1652]: New session 10 of user core. Jan 17 00:08:32.525259 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:08:32.679264 containerd[1672]: time="2026-01-17T00:08:32.678989085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:08:32.928030 sshd[5581]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:32.933462 systemd[1]: sshd@7-10.200.20.43:22-10.200.16.10:57798.service: Deactivated successfully. Jan 17 00:08:32.936646 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:08:32.938589 systemd-logind[1652]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:08:32.939600 systemd-logind[1652]: Removed session 10. Jan 17 00:08:32.941386 containerd[1672]: time="2026-01-17T00:08:32.941342190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:32.943950 containerd[1672]: time="2026-01-17T00:08:32.943882274Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:08:32.944055 containerd[1672]: time="2026-01-17T00:08:32.944006994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:08:32.944744 kubelet[3141]: E0117 00:08:32.944192 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:08:32.944744 kubelet[3141]: E0117 00:08:32.944244 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:08:32.944744 kubelet[3141]: E0117 00:08:32.944316 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-894f9f8d4-b5lgh_calico-system(f59d9319-e335-4bfc-a026-d8bbe3696e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
logger="UnhandledError" Jan 17 00:08:32.944744 kubelet[3141]: E0117 00:08:32.944349 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:08:38.010900 systemd[1]: Started sshd@8-10.200.20.43:22-10.200.16.10:57800.service - OpenSSH per-connection server daemon (10.200.16.10:57800). Jan 17 00:08:38.469389 sshd[5597]: Accepted publickey for core from 10.200.16.10 port 57800 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:38.470820 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:38.479185 systemd-logind[1652]: New session 11 of user core. Jan 17 00:08:38.483352 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:08:38.679635 containerd[1672]: time="2026-01-17T00:08:38.679246896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:08:38.944801 sshd[5597]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:38.949155 systemd[1]: sshd@8-10.200.20.43:22-10.200.16.10:57800.service: Deactivated successfully. Jan 17 00:08:38.953598 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:08:38.954561 systemd-logind[1652]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:08:38.955026 containerd[1672]: time="2026-01-17T00:08:38.954978522Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:08:38.958147 systemd-logind[1652]: Removed session 11. 
Jan 17 00:08:38.959306 containerd[1672]: time="2026-01-17T00:08:38.959250448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:08:38.959412 containerd[1672]: time="2026-01-17T00:08:38.959366968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:08:38.960246 kubelet[3141]: E0117 00:08:38.959555 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:08:38.960246 kubelet[3141]: E0117 00:08:38.959600 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:08:38.960246 kubelet[3141]: E0117 00:08:38.959671 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-hg9nz_calico-system(92047ce3-1e28-4b15-bb95-00e4947b1fab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:08:38.960246 kubelet[3141]: E0117 00:08:38.959701 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:08:40.681550 kubelet[3141]: E0117 00:08:40.681419 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:08:42.679805 kubelet[3141]: E0117 00:08:42.679659 3141 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:08:43.679076 kubelet[3141]: E0117 00:08:43.678640 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:08:43.681063 kubelet[3141]: E0117 00:08:43.679786 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:08:44.018443 systemd[1]: Started sshd@9-10.200.20.43:22-10.200.16.10:39276.service - OpenSSH per-connection server daemon (10.200.16.10:39276). Jan 17 00:08:44.511065 sshd[5631]: Accepted publickey for core from 10.200.16.10 port 39276 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:44.512928 sshd[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:44.517120 systemd-logind[1652]: New session 12 of user core. Jan 17 00:08:44.522204 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:08:44.949974 sshd[5631]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:44.955257 systemd[1]: sshd@9-10.200.20.43:22-10.200.16.10:39276.service: Deactivated successfully. Jan 17 00:08:44.960626 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:08:44.964581 systemd-logind[1652]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:08:44.966283 systemd-logind[1652]: Removed session 12. Jan 17 00:08:45.046683 systemd[1]: Started sshd@10-10.200.20.43:22-10.200.16.10:39282.service - OpenSSH per-connection server daemon (10.200.16.10:39282). 
Jan 17 00:08:45.535695 sshd[5645]: Accepted publickey for core from 10.200.16.10 port 39282 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:45.537211 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:45.541837 systemd-logind[1652]: New session 13 of user core. Jan 17 00:08:45.548233 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:08:45.991898 sshd[5645]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:45.995168 systemd[1]: sshd@10-10.200.20.43:22-10.200.16.10:39282.service: Deactivated successfully. Jan 17 00:08:45.997529 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:08:45.998863 systemd-logind[1652]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:08:45.999822 systemd-logind[1652]: Removed session 13. Jan 17 00:08:46.078079 systemd[1]: Started sshd@11-10.200.20.43:22-10.200.16.10:39290.service - OpenSSH per-connection server daemon (10.200.16.10:39290). Jan 17 00:08:46.575940 sshd[5656]: Accepted publickey for core from 10.200.16.10 port 39290 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:46.579654 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:46.583724 systemd-logind[1652]: New session 14 of user core. Jan 17 00:08:46.589751 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:08:46.681127 kubelet[3141]: E0117 00:08:46.681016 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:08:47.016934 sshd[5656]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:47.023766 systemd[1]: sshd@11-10.200.20.43:22-10.200.16.10:39290.service: Deactivated successfully. Jan 17 00:08:47.027467 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:08:47.029176 systemd-logind[1652]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:08:47.030688 systemd-logind[1652]: Removed session 14. Jan 17 00:08:47.437626 containerd[1672]: time="2026-01-17T00:08:47.437238224Z" level=info msg="StopPodSandbox for \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\"" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.497 [WARNING][5682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"92047ce3-1e28-4b15-bb95-00e4947b1fab", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4", Pod:"goldmane-7c778bb748-hg9nz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbc8046b587", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.497 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.497 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" iface="eth0" netns="" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.497 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.497 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.517 [INFO][5690] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.517 [INFO][5690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.517 [INFO][5690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.526 [WARNING][5690] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.526 [INFO][5690] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.528 [INFO][5690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:47.531974 containerd[1672]: 2026-01-17 00:08:47.530 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.531974 containerd[1672]: time="2026-01-17T00:08:47.531814682Z" level=info msg="TearDown network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" successfully" Jan 17 00:08:47.531974 containerd[1672]: time="2026-01-17T00:08:47.531839082Z" level=info msg="StopPodSandbox for \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" returns successfully" Jan 17 00:08:47.533431 containerd[1672]: time="2026-01-17T00:08:47.533120364Z" level=info msg="RemovePodSandbox for \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\"" Jan 17 00:08:47.533431 containerd[1672]: time="2026-01-17T00:08:47.533158564Z" level=info msg="Forcibly stopping sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\"" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.574 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"92047ce3-1e28-4b15-bb95-00e4947b1fab", ResourceVersion:"1293", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"bb8a4a185bba05d035b798913977d2597d8ef4a46b86130643c3b781c9615be4", Pod:"goldmane-7c778bb748-hg9nz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidbc8046b587", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.574 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.574 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" iface="eth0" netns="" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.574 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.574 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.595 [INFO][5711] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.595 [INFO][5711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.595 [INFO][5711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.606 [WARNING][5711] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.606 [INFO][5711] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" HandleID="k8s-pod-network.11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-goldmane--7c778bb748--hg9nz-eth0" Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.607 [INFO][5711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:47.613106 containerd[1672]: 2026-01-17 00:08:47.609 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61" Jan 17 00:08:47.613106 containerd[1672]: time="2026-01-17T00:08:47.612198439Z" level=info msg="TearDown network for sandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" successfully" Jan 17 00:08:47.672483 containerd[1672]: time="2026-01-17T00:08:47.672423886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:47.672716 containerd[1672]: time="2026-01-17T00:08:47.672696447Z" level=info msg="RemovePodSandbox \"11196c56939e6ef5afc1eda326a99b6846511cd50a98b945ec98adfbc8908c61\" returns successfully" Jan 17 00:08:47.673309 containerd[1672]: time="2026-01-17T00:08:47.673274407Z" level=info msg="StopPodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\"" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.712 [WARNING][5725] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b1f66b76-7db3-449d-92fa-faa5ceccc08b", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c", Pod:"csi-node-driver-v4lqg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida7d340b51f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.712 [INFO][5725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.712 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" iface="eth0" netns="" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.712 [INFO][5725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.712 [INFO][5725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.734 [INFO][5732] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.734 [INFO][5732] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.734 [INFO][5732] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.743 [WARNING][5732] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.743 [INFO][5732] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.744 [INFO][5732] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:47.753288 containerd[1672]: 2026-01-17 00:08:47.750 [INFO][5725] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.754710 containerd[1672]: time="2026-01-17T00:08:47.753657964Z" level=info msg="TearDown network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" successfully" Jan 17 00:08:47.754710 containerd[1672]: time="2026-01-17T00:08:47.753686004Z" level=info msg="StopPodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" returns successfully" Jan 17 00:08:47.754710 containerd[1672]: time="2026-01-17T00:08:47.754218165Z" level=info msg="RemovePodSandbox for \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\"" Jan 17 00:08:47.754710 containerd[1672]: time="2026-01-17T00:08:47.754247685Z" level=info msg="Forcibly stopping sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\"" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.796 [WARNING][5746] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b1f66b76-7db3-449d-92fa-faa5ceccc08b", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"26b4a812d546f91a2d81e79ddb558c9bd04a08f1e134de90ffabcbfd7e50b41c", Pod:"csi-node-driver-v4lqg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida7d340b51f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.796 [INFO][5746] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.796 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" iface="eth0" netns="" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.796 [INFO][5746] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.797 [INFO][5746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.824 [INFO][5753] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.824 [INFO][5753] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.824 [INFO][5753] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.842 [WARNING][5753] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.842 [INFO][5753] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" HandleID="k8s-pod-network.c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-csi--node--driver--v4lqg-eth0" Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.844 [INFO][5753] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:47.850834 containerd[1672]: 2026-01-17 00:08:47.848 [INFO][5746] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d" Jan 17 00:08:47.851779 containerd[1672]: time="2026-01-17T00:08:47.851380106Z" level=info msg="TearDown network for sandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" successfully" Jan 17 00:08:47.875822 containerd[1672]: time="2026-01-17T00:08:47.875690741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:47.875822 containerd[1672]: time="2026-01-17T00:08:47.875792382Z" level=info msg="RemovePodSandbox \"c8071d5c6f9864801ded1cdcbe071850e837a646e1ae91a03595b63f04f7894d\" returns successfully" Jan 17 00:08:47.876571 containerd[1672]: time="2026-01-17T00:08:47.876538503Z" level=info msg="StopPodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\"" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.938 [WARNING][5767] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d9310f4-1124-495b-a411-5323618ddd1d", ResourceVersion:"1325", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699", Pod:"calico-apiserver-57d7d85589-mrl7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali038a4ac797f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.938 [INFO][5767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.938 [INFO][5767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" iface="eth0" netns="" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.938 [INFO][5767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.938 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.963 [INFO][5774] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.963 [INFO][5774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.963 [INFO][5774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.975 [WARNING][5774] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.975 [INFO][5774] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.978 [INFO][5774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:47.983843 containerd[1672]: 2026-01-17 00:08:47.981 [INFO][5767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:47.983843 containerd[1672]: time="2026-01-17T00:08:47.983708898Z" level=info msg="TearDown network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" successfully" Jan 17 00:08:47.983843 containerd[1672]: time="2026-01-17T00:08:47.983740058Z" level=info msg="StopPodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" returns successfully" Jan 17 00:08:47.985102 containerd[1672]: time="2026-01-17T00:08:47.984793540Z" level=info msg="RemovePodSandbox for \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\"" Jan 17 00:08:47.985102 containerd[1672]: time="2026-01-17T00:08:47.984826820Z" level=info msg="Forcibly stopping sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\"" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.025 [WARNING][5789] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"4d9310f4-1124-495b-a411-5323618ddd1d", ResourceVersion:"1325", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"9efc895de6148776b61761b1284b7376b9d09509ef9c7714d1837c791778b699", Pod:"calico-apiserver-57d7d85589-mrl7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali038a4ac797f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.026 [INFO][5789] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.026 [INFO][5789] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" iface="eth0" netns="" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.026 [INFO][5789] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.026 [INFO][5789] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.052 [INFO][5796] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.052 [INFO][5796] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.052 [INFO][5796] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.061 [WARNING][5796] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.061 [INFO][5796] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" HandleID="k8s-pod-network.192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--mrl7f-eth0" Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.068 [INFO][5796] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:48.073408 containerd[1672]: 2026-01-17 00:08:48.070 [INFO][5789] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386" Jan 17 00:08:48.073408 containerd[1672]: time="2026-01-17T00:08:48.073265789Z" level=info msg="TearDown network for sandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" successfully" Jan 17 00:08:48.082059 containerd[1672]: time="2026-01-17T00:08:48.081755761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:48.082059 containerd[1672]: time="2026-01-17T00:08:48.081841921Z" level=info msg="RemovePodSandbox \"192033b471efa24e43217f87f4b5b4617dae4e1edf2b434486f195a02d5db386\" returns successfully" Jan 17 00:08:48.082365 containerd[1672]: time="2026-01-17T00:08:48.082315922Z" level=info msg="StopPodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\"" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.143 [WARNING][5811] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20", ResourceVersion:"1328", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a", Pod:"calico-apiserver-57d7d85589-ght5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6fe12a186f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.143 [INFO][5811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.143 [INFO][5811] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" iface="eth0" netns="" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.143 [INFO][5811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.143 [INFO][5811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.171 [INFO][5819] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.171 [INFO][5819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.171 [INFO][5819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.183 [WARNING][5819] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.183 [INFO][5819] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.184 [INFO][5819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:48.188243 containerd[1672]: 2026-01-17 00:08:48.186 [INFO][5811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.189258 containerd[1672]: time="2026-01-17T00:08:48.189224357Z" level=info msg="TearDown network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" successfully" Jan 17 00:08:48.189258 containerd[1672]: time="2026-01-17T00:08:48.189255877Z" level=info msg="StopPodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" returns successfully" Jan 17 00:08:48.190178 containerd[1672]: time="2026-01-17T00:08:48.189741398Z" level=info msg="RemovePodSandbox for \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\"" Jan 17 00:08:48.190178 containerd[1672]: time="2026-01-17T00:08:48.189783918Z" level=info msg="Forcibly stopping sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\"" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.233 [WARNING][5834] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0", GenerateName:"calico-apiserver-57d7d85589-", Namespace:"calico-apiserver", SelfLink:"", UID:"bcc0dcb5-6cc0-4aca-b131-0866d93b8e20", ResourceVersion:"1328", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d7d85589", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"573c49c00a51309558d10f47f8bedf9a8331b5559b86f422946aace776c4dd2a", Pod:"calico-apiserver-57d7d85589-ght5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6fe12a186f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.234 [INFO][5834] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.234 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" iface="eth0" netns="" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.234 [INFO][5834] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.234 [INFO][5834] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.256 [INFO][5841] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.256 [INFO][5841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.256 [INFO][5841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.267 [WARNING][5841] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.267 [INFO][5841] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" HandleID="k8s-pod-network.01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--apiserver--57d7d85589--ght5v-eth0" Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.269 [INFO][5841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:48.274380 containerd[1672]: 2026-01-17 00:08:48.271 [INFO][5834] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9" Jan 17 00:08:48.275119 containerd[1672]: time="2026-01-17T00:08:48.274937282Z" level=info msg="TearDown network for sandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" successfully" Jan 17 00:08:48.334998 containerd[1672]: time="2026-01-17T00:08:48.334683248Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:48.334998 containerd[1672]: time="2026-01-17T00:08:48.334754568Z" level=info msg="RemovePodSandbox \"01b119d76262358fe06e2aded123d0c0c4053cf0ade23eb31018d5aed7adddb9\" returns successfully" Jan 17 00:08:48.336414 containerd[1672]: time="2026-01-17T00:08:48.336373331Z" level=info msg="StopPodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\"" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.389 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c75cd337-98e1-4c98-836d-ddd5677f5fcd", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2", Pod:"coredns-66bc5c9577-xzzqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba2a4c5921c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.390 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.390 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" iface="eth0" netns="" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.390 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.390 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.427 [INFO][5863] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.428 [INFO][5863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.428 [INFO][5863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.439 [WARNING][5863] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.439 [INFO][5863] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.440 [INFO][5863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:48.445553 containerd[1672]: 2026-01-17 00:08:48.443 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.445553 containerd[1672]: time="2026-01-17T00:08:48.445433969Z" level=info msg="TearDown network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" successfully" Jan 17 00:08:48.445553 containerd[1672]: time="2026-01-17T00:08:48.445461529Z" level=info msg="StopPodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" returns successfully" Jan 17 00:08:48.447462 containerd[1672]: time="2026-01-17T00:08:48.447430212Z" level=info msg="RemovePodSandbox for \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\"" Jan 17 00:08:48.447536 containerd[1672]: time="2026-01-17T00:08:48.447470692Z" level=info msg="Forcibly stopping sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\"" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.500 [WARNING][5877] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c75cd337-98e1-4c98-836d-ddd5677f5fcd", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"2fadd6c6910dacdb6e70d1ac98a04df5a4f4f62ab0418a5ec40f1be93c0d9db2", Pod:"coredns-66bc5c9577-xzzqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliba2a4c5921c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.500 [INFO][5877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.500 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" iface="eth0" netns="" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.500 [INFO][5877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.500 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.531 [INFO][5885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.531 [INFO][5885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.531 [INFO][5885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.553 [WARNING][5885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.553 [INFO][5885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" HandleID="k8s-pod-network.94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--xzzqx-eth0" Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.560 [INFO][5885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:48.565095 containerd[1672]: 2026-01-17 00:08:48.561 [INFO][5877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e" Jan 17 00:08:48.565095 containerd[1672]: time="2026-01-17T00:08:48.564720223Z" level=info msg="TearDown network for sandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" successfully" Jan 17 00:08:50.520533 containerd[1672]: time="2026-01-17T00:08:50.520481824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:50.521366 containerd[1672]: time="2026-01-17T00:08:50.520911505Z" level=info msg="RemovePodSandbox \"94a8c35ceb5e971c0d348184bdf3e3fb9f3fb78ab15db0d33ac3228098f3b35e\" returns successfully" Jan 17 00:08:50.522821 containerd[1672]: time="2026-01-17T00:08:50.522447667Z" level=info msg="StopPodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\"" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.578 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0ad856fe-523a-4a16-bb22-1a01d08264e2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a", Pod:"coredns-66bc5c9577-m66dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali456f4fb461b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.578 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.578 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" iface="eth0" netns="" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.578 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.578 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.598 [INFO][5907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.598 [INFO][5907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.598 [INFO][5907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.608 [WARNING][5907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.608 [INFO][5907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.610 [INFO][5907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:50.615659 containerd[1672]: 2026-01-17 00:08:50.612 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.615659 containerd[1672]: time="2026-01-17T00:08:50.615610202Z" level=info msg="TearDown network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" successfully" Jan 17 00:08:50.616694 containerd[1672]: time="2026-01-17T00:08:50.615636602Z" level=info msg="StopPodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" returns successfully" Jan 17 00:08:50.617181 containerd[1672]: time="2026-01-17T00:08:50.616868164Z" level=info msg="RemovePodSandbox for \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\"" Jan 17 00:08:50.617181 containerd[1672]: time="2026-01-17T00:08:50.616904284Z" level=info msg="Forcibly stopping sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\"" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.667 [WARNING][5921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0ad856fe-523a-4a16-bb22-1a01d08264e2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 6, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"c38dfcb1a1631ff2e0612ed363efad00d38918ed2a1354f777da3d673899d94a", Pod:"coredns-66bc5c9577-m66dw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali456f4fb461b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.668 [INFO][5921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.668 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" iface="eth0" netns="" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.668 [INFO][5921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.668 [INFO][5921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.702 [INFO][5928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.703 [INFO][5928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.703 [INFO][5928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.712 [WARNING][5928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.712 [INFO][5928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" HandleID="k8s-pod-network.104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-coredns--66bc5c9577--m66dw-eth0" Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.714 [INFO][5928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:50.718226 containerd[1672]: 2026-01-17 00:08:50.715 [INFO][5921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7" Jan 17 00:08:50.718226 containerd[1672]: time="2026-01-17T00:08:50.718184511Z" level=info msg="TearDown network for sandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" successfully" Jan 17 00:08:50.773724 containerd[1672]: time="2026-01-17T00:08:50.772088990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:50.773724 containerd[1672]: time="2026-01-17T00:08:50.772202350Z" level=info msg="RemovePodSandbox \"104b57a86216977755c9e21ec4c47588fc97422769aedaf511b3bec8bfc996c7\" returns successfully" Jan 17 00:08:50.773724 containerd[1672]: time="2026-01-17T00:08:50.772776791Z" level=info msg="StopPodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\"" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.821 [WARNING][5942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0", GenerateName:"calico-kube-controllers-894f9f8d4-", Namespace:"calico-system", SelfLink:"", UID:"f59d9319-e335-4bfc-a026-d8bbe3696e81", ResourceVersion:"1361", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"894f9f8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f", Pod:"calico-kube-controllers-894f9f8d4-b5lgh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73bfb212120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.822 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.822 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" iface="eth0" netns="" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.822 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.822 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.844 [INFO][5950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.844 [INFO][5950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.844 [INFO][5950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.853 [WARNING][5950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.854 [INFO][5950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.855 [INFO][5950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:50.859679 containerd[1672]: 2026-01-17 00:08:50.857 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.859679 containerd[1672]: time="2026-01-17T00:08:50.859215596Z" level=info msg="TearDown network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" successfully" Jan 17 00:08:50.859679 containerd[1672]: time="2026-01-17T00:08:50.859619477Z" level=info msg="StopPodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" returns successfully" Jan 17 00:08:50.861353 containerd[1672]: time="2026-01-17T00:08:50.860801079Z" level=info msg="RemovePodSandbox for \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\"" Jan 17 00:08:50.861353 containerd[1672]: time="2026-01-17T00:08:50.860832999Z" level=info msg="Forcibly stopping sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\"" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.895 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0", GenerateName:"calico-kube-controllers-894f9f8d4-", Namespace:"calico-system", SelfLink:"", UID:"f59d9319-e335-4bfc-a026-d8bbe3696e81", ResourceVersion:"1361", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"894f9f8d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-4c16a83c6c", ContainerID:"b4cb5c82385090c5ce990a7dc7baf64e5f8e7a3972c7e3d5172ec19bc844939f", Pod:"calico-kube-controllers-894f9f8d4-b5lgh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali73bfb212120", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.895 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.895 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" iface="eth0" netns="" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.895 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.895 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.929 [INFO][5971] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.933 [INFO][5971] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.934 [INFO][5971] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.943 [WARNING][5971] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.943 [INFO][5971] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" HandleID="k8s-pod-network.9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Workload="ci--4081.3.6--n--4c16a83c6c-k8s-calico--kube--controllers--894f9f8d4--b5lgh-eth0" Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.944 [INFO][5971] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:08:50.949070 containerd[1672]: 2026-01-17 00:08:50.946 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca" Jan 17 00:08:50.949070 containerd[1672]: time="2026-01-17T00:08:50.948914927Z" level=info msg="TearDown network for sandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" successfully" Jan 17 00:08:51.074400 containerd[1672]: time="2026-01-17T00:08:51.073993468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:08:51.074400 containerd[1672]: time="2026-01-17T00:08:51.074081669Z" level=info msg="RemovePodSandbox \"9a9228ae4bdd8715f072170546335250445c5b147ac9cb2147c79bdcec8161ca\" returns successfully" Jan 17 00:08:52.108347 systemd[1]: Started sshd@12-10.200.20.43:22-10.200.16.10:47272.service - OpenSSH per-connection server daemon (10.200.16.10:47272). Jan 17 00:08:52.606769 sshd[5978]: Accepted publickey for core from 10.200.16.10 port 47272 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:52.608645 sshd[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:52.613726 systemd-logind[1652]: New session 15 of user core. Jan 17 00:08:52.618224 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:08:53.035908 sshd[5978]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:53.042697 systemd[1]: sshd@12-10.200.20.43:22-10.200.16.10:47272.service: Deactivated successfully. Jan 17 00:08:53.042841 systemd-logind[1652]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:08:53.045446 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:08:53.047244 systemd-logind[1652]: Removed session 15. Jan 17 00:08:53.126310 systemd[1]: Started sshd@13-10.200.20.43:22-10.200.16.10:47278.service - OpenSSH per-connection server daemon (10.200.16.10:47278). Jan 17 00:08:53.572585 sshd[5993]: Accepted publickey for core from 10.200.16.10 port 47278 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:53.574272 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:53.580926 systemd-logind[1652]: New session 16 of user core. Jan 17 00:08:53.587234 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 17 00:08:53.679593 kubelet[3141]: E0117 00:08:53.679552 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:08:54.679438 kubelet[3141]: E0117 00:08:54.679081 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:08:54.812660 sshd[5993]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:54.816924 systemd[1]: sshd@13-10.200.20.43:22-10.200.16.10:47278.service: Deactivated successfully. Jan 17 00:08:54.819646 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:08:54.825323 systemd-logind[1652]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:08:54.826818 systemd-logind[1652]: Removed session 16. Jan 17 00:08:54.904371 systemd[1]: Started sshd@14-10.200.20.43:22-10.200.16.10:47288.service - OpenSSH per-connection server daemon (10.200.16.10:47288). Jan 17 00:08:55.404821 sshd[6004]: Accepted publickey for core from 10.200.16.10 port 47288 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:55.406397 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:55.410453 systemd-logind[1652]: New session 17 of user core. Jan 17 00:08:55.417219 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 00:08:55.681402 kubelet[3141]: E0117 00:08:55.681197 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:08:56.681919 kubelet[3141]: E0117 00:08:56.681859 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:08:56.683686 kubelet[3141]: E0117 00:08:56.683646 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:08:57.056670 sshd[6004]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:57.061600 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:08:57.064396 systemd[1]: sshd@14-10.200.20.43:22-10.200.16.10:47288.service: Deactivated successfully. Jan 17 00:08:57.068113 systemd-logind[1652]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:08:57.069760 systemd-logind[1652]: Removed session 17. Jan 17 00:08:57.143973 systemd[1]: Started sshd@15-10.200.20.43:22-10.200.16.10:47292.service - OpenSSH per-connection server daemon (10.200.16.10:47292). 
Jan 17 00:08:57.634752 sshd[6022]: Accepted publickey for core from 10.200.16.10 port 47292 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:57.636763 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:57.641558 systemd-logind[1652]: New session 18 of user core. Jan 17 00:08:57.649272 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:08:57.678236 kubelet[3141]: E0117 00:08:57.678135 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:08:58.244159 sshd[6022]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:58.248604 systemd-logind[1652]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:08:58.249308 systemd[1]: sshd@15-10.200.20.43:22-10.200.16.10:47292.service: Deactivated successfully. Jan 17 00:08:58.254223 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:08:58.258290 systemd-logind[1652]: Removed session 18. Jan 17 00:08:58.324324 systemd[1]: Started sshd@16-10.200.20.43:22-10.200.16.10:47300.service - OpenSSH per-connection server daemon (10.200.16.10:47300). Jan 17 00:08:58.782097 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 47300 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:08:58.783663 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:08:58.789366 systemd-logind[1652]: New session 19 of user core. Jan 17 00:08:58.796280 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:08:59.183275 sshd[6037]: pam_unix(sshd:session): session closed for user core Jan 17 00:08:59.187151 systemd[1]: sshd@16-10.200.20.43:22-10.200.16.10:47300.service: Deactivated successfully. Jan 17 00:08:59.189372 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:08:59.191576 systemd-logind[1652]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:08:59.192914 systemd-logind[1652]: Removed session 19. Jan 17 00:09:04.270647 systemd[1]: Started sshd@17-10.200.20.43:22-10.200.16.10:42280.service - OpenSSH per-connection server daemon (10.200.16.10:42280). Jan 17 00:09:04.771522 sshd[6051]: Accepted publickey for core from 10.200.16.10 port 42280 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:04.772716 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:04.778671 systemd-logind[1652]: New session 20 of user core. Jan 17 00:09:04.781252 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:09:05.179291 sshd[6051]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:05.182335 systemd-logind[1652]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:09:05.182601 systemd[1]: sshd@17-10.200.20.43:22-10.200.16.10:42280.service: Deactivated successfully. Jan 17 00:09:05.184986 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 17 00:09:05.187645 systemd-logind[1652]: Removed session 20. Jan 17 00:09:06.683612 kubelet[3141]: E0117 00:09:06.682891 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d" Jan 17 00:09:08.681099 kubelet[3141]: E0117 00:09:08.680592 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:09:08.684104 kubelet[3141]: E0117 00:09:08.683947 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b" Jan 17 00:09:09.680502 containerd[1672]: time="2026-01-17T00:09:09.680464295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:09:09.986143 containerd[1672]: time="2026-01-17T00:09:09.985878157Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:09:09.988530 containerd[1672]: time="2026-01-17T00:09:09.988426121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:09:09.988641 containerd[1672]: time="2026-01-17T00:09:09.988444481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:09:09.988710 kubelet[3141]: E0117 00:09:09.988669 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:09:09.988973 kubelet[3141]: E0117 00:09:09.988715 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:09:09.988973 kubelet[3141]: E0117 00:09:09.988880 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:09:09.989516 containerd[1672]: time="2026-01-17T00:09:09.989427042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:09:10.261325 containerd[1672]: time="2026-01-17T00:09:10.261151338Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:09:10.263587 containerd[1672]: time="2026-01-17T00:09:10.263487501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:09:10.263825 containerd[1672]: time="2026-01-17T00:09:10.263735501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:09:10.264037 kubelet[3141]: E0117 00:09:10.263990 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:09:10.264121 kubelet[3141]: E0117 00:09:10.264056 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:09:10.265674 kubelet[3141]: E0117 00:09:10.264262 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-ght5v_calico-apiserver(bcc0dcb5-6cc0-4aca-b131-0866d93b8e20): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:09:10.265674 kubelet[3141]: E0117 00:09:10.264303 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20" Jan 17 00:09:10.265830 containerd[1672]: time="2026-01-17T00:09:10.265460984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:09:10.267195 systemd[1]: Started sshd@18-10.200.20.43:22-10.200.16.10:44712.service - OpenSSH per-connection server daemon (10.200.16.10:44712). Jan 17 00:09:10.520951 containerd[1672]: time="2026-01-17T00:09:10.520705496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:09:10.528942 containerd[1672]: time="2026-01-17T00:09:10.528814108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:09:10.528942 containerd[1672]: time="2026-01-17T00:09:10.528912508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:09:10.529150 kubelet[3141]: E0117 00:09:10.529088 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:09:10.529150 kubelet[3141]: E0117 00:09:10.529136 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:09:10.529216 kubelet[3141]: E0117 00:09:10.529204 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-fdd85dd66-d8cmt_calico-system(f0b7656b-346b-4c7a-84f5-6afacf5c8b98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:09:10.530548 kubelet[3141]: E0117 00:09:10.529247 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98" Jan 17 00:09:10.680072 kubelet[3141]: E0117 00:09:10.679934 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81" Jan 17 00:09:10.721065 sshd[6071]: Accepted publickey for core from 10.200.16.10 port 44712 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:10.721707 sshd[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:10.726939 systemd-logind[1652]: New session 21 of user core. Jan 17 00:09:10.733257 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:09:11.150126 sshd[6071]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:11.154452 systemd[1]: sshd@18-10.200.20.43:22-10.200.16.10:44712.service: Deactivated successfully. Jan 17 00:09:11.159106 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:09:11.161171 systemd-logind[1652]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:09:11.162415 systemd-logind[1652]: Removed session 21. Jan 17 00:09:16.246445 systemd[1]: Started sshd@19-10.200.20.43:22-10.200.16.10:44722.service - OpenSSH per-connection server daemon (10.200.16.10:44722). Jan 17 00:09:16.737283 sshd[6107]: Accepted publickey for core from 10.200.16.10 port 44722 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8 Jan 17 00:09:16.739971 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:09:16.744806 systemd-logind[1652]: New session 22 of user core. Jan 17 00:09:16.750409 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:09:17.174546 sshd[6107]: pam_unix(sshd:session): session closed for user core Jan 17 00:09:17.182260 systemd[1]: sshd@19-10.200.20.43:22-10.200.16.10:44722.service: Deactivated successfully. Jan 17 00:09:17.185911 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:09:17.187036 systemd-logind[1652]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:09:17.189196 systemd-logind[1652]: Removed session 22. 
Jan 17 00:09:19.679006 containerd[1672]: time="2026-01-17T00:09:19.678573673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:09:19.952321 containerd[1672]: time="2026-01-17T00:09:19.952193768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:09:19.955327 containerd[1672]: time="2026-01-17T00:09:19.955251212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:09:19.955327 containerd[1672]: time="2026-01-17T00:09:19.955293332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:09:19.955538 kubelet[3141]: E0117 00:09:19.955461 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:09:19.955538 kubelet[3141]: E0117 00:09:19.955506 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:09:19.955863 kubelet[3141]: E0117 00:09:19.955580 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-hg9nz_calico-system(92047ce3-1e28-4b15-bb95-00e4947b1fab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:09:19.955863 kubelet[3141]: E0117 00:09:19.955611 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-hg9nz" podUID="92047ce3-1e28-4b15-bb95-00e4947b1fab" Jan 17 00:09:21.680348 containerd[1672]: time="2026-01-17T00:09:21.680241132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:09:21.932325 containerd[1672]: time="2026-01-17T00:09:21.932085997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:09:21.936379 containerd[1672]: time="2026-01-17T00:09:21.936265923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:09:21.936379 containerd[1672]: time="2026-01-17T00:09:21.936321083Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 17 00:09:21.936914 kubelet[3141]: E0117 00:09:21.936698 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:09:21.936914 kubelet[3141]: E0117 00:09:21.936747 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 17 00:09:21.937254 kubelet[3141]: E0117 00:09:21.936934 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:09:21.938340 containerd[1672]: time="2026-01-17T00:09:21.937564725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:09:22.222801 containerd[1672]: time="2026-01-17T00:09:22.221453353Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:09:22.224568 containerd[1672]: time="2026-01-17T00:09:22.224427317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:09:22.224568 containerd[1672]: time="2026-01-17T00:09:22.224534197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:09:22.224732 kubelet[3141]: E0117 00:09:22.224691 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:09:22.224795 kubelet[3141]: E0117 00:09:22.224741 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:09:22.224936 kubelet[3141]: E0117 00:09:22.224905 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-57d7d85589-mrl7f_calico-apiserver(4d9310f4-1124-495b-a411-5323618ddd1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:09:22.224978 kubelet[3141]: E0117 00:09:22.224947 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-mrl7f" podUID="4d9310f4-1124-495b-a411-5323618ddd1d"
Jan 17 00:09:22.226352 containerd[1672]: time="2026-01-17T00:09:22.226322800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 17 00:09:22.264650 systemd[1]: Started sshd@20-10.200.20.43:22-10.200.16.10:48070.service - OpenSSH per-connection server daemon (10.200.16.10:48070).
Jan 17 00:09:22.497223 containerd[1672]: time="2026-01-17T00:09:22.497176650Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:09:22.502353 containerd[1672]: time="2026-01-17T00:09:22.502158137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 17 00:09:22.502353 containerd[1672]: time="2026-01-17T00:09:22.502278937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 17 00:09:22.503373 kubelet[3141]: E0117 00:09:22.502450 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:09:22.503373 kubelet[3141]: E0117 00:09:22.502507 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 17 00:09:22.503373 kubelet[3141]: E0117 00:09:22.502702 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-894f9f8d4-b5lgh_calico-system(f59d9319-e335-4bfc-a026-d8bbe3696e81): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:09:22.503373 kubelet[3141]: E0117 00:09:22.502764 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-894f9f8d4-b5lgh" podUID="f59d9319-e335-4bfc-a026-d8bbe3696e81"
Jan 17 00:09:22.504342 containerd[1672]: time="2026-01-17T00:09:22.503927899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:09:22.721077 sshd[6141]: Accepted publickey for core from 10.200.16.10 port 48070 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:09:22.722091 sshd[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:09:22.726170 systemd-logind[1652]: New session 23 of user core.
Jan 17 00:09:22.733261 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 00:09:22.761360 containerd[1672]: time="2026-01-17T00:09:22.761230932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:09:22.763985 containerd[1672]: time="2026-01-17T00:09:22.763912335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:09:22.764115 containerd[1672]: time="2026-01-17T00:09:22.764025855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:09:22.764239 kubelet[3141]: E0117 00:09:22.764197 3141 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:09:22.764317 kubelet[3141]: E0117 00:09:22.764243 3141 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:09:22.764583 kubelet[3141]: E0117 00:09:22.764318 3141 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v4lqg_calico-system(b1f66b76-7db3-449d-92fa-faa5ceccc08b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:09:22.764583 kubelet[3141]: E0117 00:09:22.764359 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v4lqg" podUID="b1f66b76-7db3-449d-92fa-faa5ceccc08b"
Jan 17 00:09:23.119395 sshd[6141]: pam_unix(sshd:session): session closed for user core
Jan 17 00:09:23.122837 systemd[1]: sshd@20-10.200.20.43:22-10.200.16.10:48070.service: Deactivated successfully.
Jan 17 00:09:23.125696 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 00:09:23.126639 systemd-logind[1652]: Session 23 logged out. Waiting for processes to exit.
Jan 17 00:09:23.127767 systemd-logind[1652]: Removed session 23.
Jan 17 00:09:23.682561 kubelet[3141]: E0117 00:09:23.682514 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-fdd85dd66-d8cmt" podUID="f0b7656b-346b-4c7a-84f5-6afacf5c8b98"
Jan 17 00:09:25.680684 kubelet[3141]: E0117 00:09:25.680537 3141 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-57d7d85589-ght5v" podUID="bcc0dcb5-6cc0-4aca-b131-0866d93b8e20"
Jan 17 00:09:28.230361 systemd[1]: Started sshd@21-10.200.20.43:22-10.200.16.10:48076.service - OpenSSH per-connection server daemon (10.200.16.10:48076).
Jan 17 00:09:28.718738 sshd[6156]: Accepted publickey for core from 10.200.16.10 port 48076 ssh2: RSA SHA256:h9EzKM+OQiROMon03wb6yima4rGeMK2wJ6P2Si2QWb8
Jan 17 00:09:28.720370 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:09:28.726963 systemd-logind[1652]: New session 24 of user core.
Jan 17 00:09:28.733289 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 00:09:29.135668 sshd[6156]: pam_unix(sshd:session): session closed for user core
Jan 17 00:09:29.141212 systemd[1]: sshd@21-10.200.20.43:22-10.200.16.10:48076.service: Deactivated successfully.
Jan 17 00:09:29.143228 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:09:29.144202 systemd-logind[1652]: Session 24 logged out. Waiting for processes to exit.
Jan 17 00:09:29.145020 systemd-logind[1652]: Removed session 24.